00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 88 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3266 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.058 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.074 The recommended git tool is: git 00:00:00.075 using credential 00000000-0000-0000-0000-000000000002 00:00:00.077 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.097 Fetching changes from the remote Git repository 00:00:00.100 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.117 Using shallow fetch with depth 1 00:00:00.117 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.117 > git --version # timeout=10 00:00:00.137 > git --version # 'git version 2.39.2' 00:00:00.137 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.160 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.160 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.709 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.719 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.730 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:04.730 > git config core.sparsecheckout # timeout=10 00:00:04.741 > git read-tree -mu HEAD # timeout=10 00:00:04.756 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:04.772 Commit message: "inventory: add WCP3 to free inventory" 00:00:04.772 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:04.855 [Pipeline] Start of Pipeline 00:00:04.867 [Pipeline] library 00:00:04.868 Loading library shm_lib@master 00:00:04.869 Library shm_lib@master is cached. Copying from home. 00:00:04.888 [Pipeline] node 00:00:04.896 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.901 [Pipeline] { 00:00:04.913 [Pipeline] catchError 00:00:04.914 [Pipeline] { 00:00:04.929 [Pipeline] wrap 00:00:04.936 [Pipeline] { 00:00:04.943 [Pipeline] stage 00:00:04.945 [Pipeline] { (Prologue) 00:00:05.115 [Pipeline] sh 00:00:05.393 + logger -p user.info -t JENKINS-CI 00:00:05.414 [Pipeline] echo 00:00:05.415 Node: GP11 00:00:05.425 [Pipeline] sh 00:00:05.726 [Pipeline] setCustomBuildProperty 00:00:05.737 [Pipeline] echo 00:00:05.739 Cleanup processes 00:00:05.744 [Pipeline] sh 00:00:06.028 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.028 2550826 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.043 [Pipeline] sh 00:00:06.329 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.330 ++ grep -v 'sudo pgrep' 00:00:06.330 ++ awk '{print $1}' 00:00:06.330 + sudo kill -9 00:00:06.330 + true 00:00:06.346 [Pipeline] cleanWs 00:00:06.359 [WS-CLEANUP] Deleting project workspace... 00:00:06.359 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.366 [WS-CLEANUP] done 00:00:06.371 [Pipeline] setCustomBuildProperty 00:00:06.389 [Pipeline] sh 00:00:06.674 + sudo git config --global --replace-all safe.directory '*' 00:00:06.748 [Pipeline] httpRequest 00:00:06.766 [Pipeline] echo 00:00:06.768 Sorcerer 10.211.164.101 is alive 00:00:06.775 [Pipeline] httpRequest 00:00:06.781 HttpMethod: GET 00:00:06.782 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.784 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.801 Response Code: HTTP/1.1 200 OK 00:00:06.801 Success: Status code 200 is in the accepted range: 200,404 00:00:06.802 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:26.766 [Pipeline] sh 00:00:27.054 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:27.070 [Pipeline] httpRequest 00:00:27.095 [Pipeline] echo 00:00:27.097 Sorcerer 10.211.164.101 is alive 00:00:27.105 [Pipeline] httpRequest 00:00:27.110 HttpMethod: GET 00:00:27.110 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:27.111 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:27.126 Response Code: HTTP/1.1 200 OK 00:00:27.126 Success: Status code 200 is in the accepted range: 200,404 00:00:27.127 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:53.302 [Pipeline] sh 00:00:53.586 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:56.893 [Pipeline] sh 00:00:57.179 + git -C spdk log --oneline -n5 00:00:57.179 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:00:57.179 330a4f94d nvme: check pthread_mutex_destroy() return value 00:00:57.179 7b72c3ced nvme: add nvme_ctrlr_lock 00:00:57.179 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:00:57.179 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:00:57.201 [Pipeline] withCredentials 00:00:57.214 > git --version # timeout=10 00:00:57.227 > git --version # 'git version 2.39.2' 00:00:57.246 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:57.248 [Pipeline] { 00:00:57.259 [Pipeline] retry 00:00:57.261 [Pipeline] { 00:00:57.279 [Pipeline] sh 00:00:57.566 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:57.579 [Pipeline] } 00:00:57.603 [Pipeline] // retry 00:00:57.609 [Pipeline] } 00:00:57.630 [Pipeline] // withCredentials 00:00:57.642 [Pipeline] httpRequest 00:00:57.668 [Pipeline] echo 00:00:57.670 Sorcerer 10.211.164.101 is alive 00:00:57.679 [Pipeline] httpRequest 00:00:57.684 HttpMethod: GET 00:00:57.685 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:57.685 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:57.698 Response Code: HTTP/1.1 200 OK 00:00:57.699 Success: Status code 200 is in the accepted range: 200,404 00:00:57.699 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:06.906 [Pipeline] sh 00:01:07.192 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:09.149 [Pipeline] sh 00:01:09.428 + git -C dpdk log --oneline -n5 00:01:09.428 caf0f5d395 version: 22.11.4 00:01:09.428 7d6f1cc05f 
Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:09.428 dc9c799c7d vhost: fix missing spinlock unlock 00:01:09.428 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:09.428 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:09.441 [Pipeline] } 00:01:09.460 [Pipeline] // stage 00:01:09.470 [Pipeline] stage 00:01:09.472 [Pipeline] { (Prepare) 00:01:09.491 [Pipeline] writeFile 00:01:09.506 [Pipeline] sh 00:01:09.782 + logger -p user.info -t JENKINS-CI 00:01:09.794 [Pipeline] sh 00:01:10.071 + logger -p user.info -t JENKINS-CI 00:01:10.083 [Pipeline] sh 00:01:10.363 + cat autorun-spdk.conf 00:01:10.363 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.363 SPDK_TEST_NVMF=1 00:01:10.363 SPDK_TEST_NVME_CLI=1 00:01:10.363 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:10.363 SPDK_TEST_NVMF_NICS=e810 00:01:10.363 SPDK_TEST_VFIOUSER=1 00:01:10.363 SPDK_RUN_UBSAN=1 00:01:10.363 NET_TYPE=phy 00:01:10.363 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:10.363 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:10.370 RUN_NIGHTLY=1 00:01:10.376 [Pipeline] readFile 00:01:10.406 [Pipeline] withEnv 00:01:10.409 [Pipeline] { 00:01:10.424 [Pipeline] sh 00:01:10.707 + set -ex 00:01:10.707 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:10.707 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:10.707 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.707 ++ SPDK_TEST_NVMF=1 00:01:10.707 ++ SPDK_TEST_NVME_CLI=1 00:01:10.707 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:10.707 ++ SPDK_TEST_NVMF_NICS=e810 00:01:10.707 ++ SPDK_TEST_VFIOUSER=1 00:01:10.707 ++ SPDK_RUN_UBSAN=1 00:01:10.707 ++ NET_TYPE=phy 00:01:10.707 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:10.707 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:10.707 ++ RUN_NIGHTLY=1 00:01:10.707 + case $SPDK_TEST_NVMF_NICS in 00:01:10.707 + DRIVERS=ice 00:01:10.707 + [[ tcp == \r\d\m\a ]] 00:01:10.707 + [[ -n ice ]] 00:01:10.707 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:10.707 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:10.707 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:10.707 rmmod: ERROR: Module irdma is not currently loaded 00:01:10.707 rmmod: ERROR: Module i40iw is not currently loaded 00:01:10.707 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:10.707 + true 00:01:10.707 + for D in $DRIVERS 00:01:10.707 + sudo modprobe ice 00:01:10.707 + exit 0 00:01:10.717 [Pipeline] } 00:01:10.736 [Pipeline] // withEnv 00:01:10.742 [Pipeline] } 00:01:10.759 [Pipeline] // stage 00:01:10.770 [Pipeline] catchError 00:01:10.772 [Pipeline] { 00:01:10.785 [Pipeline] timeout 00:01:10.785 Timeout set to expire in 50 min 00:01:10.786 [Pipeline] { 00:01:10.800 [Pipeline] stage 00:01:10.802 [Pipeline] { (Tests) 00:01:10.819 [Pipeline] sh 00:01:11.099 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:11.099 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:11.099 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:11.099 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:11.099 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:11.099 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:11.099 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:11.099 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:11.099 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:11.099 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:11.099 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:11.099 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:11.099 + source /etc/os-release 00:01:11.099 ++ NAME='Fedora Linux' 00:01:11.099 ++ VERSION='38 (Cloud Edition)' 00:01:11.099 ++ ID=fedora 00:01:11.099 ++ VERSION_ID=38 00:01:11.099 ++ VERSION_CODENAME= 00:01:11.099 ++ PLATFORM_ID=platform:f38 00:01:11.099 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:11.099 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:11.099 ++ LOGO=fedora-logo-icon 00:01:11.099 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:11.099 ++ HOME_URL=https://fedoraproject.org/ 00:01:11.099 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:11.099 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:11.099 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:11.099 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:11.099 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:11.099 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:11.099 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:11.099 ++ SUPPORT_END=2024-05-14 00:01:11.099 ++ VARIANT='Cloud Edition' 00:01:11.099 ++ VARIANT_ID=cloud 00:01:11.099 + uname -a 00:01:11.099 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:11.099 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:12.033 Hugepages 00:01:12.033 node hugesize free / total 00:01:12.033 node0 1048576kB 0 / 0 00:01:12.033 node0 2048kB 0 / 0 00:01:12.033 node1 1048576kB 0 / 0 00:01:12.033 node1 2048kB 0 / 0 00:01:12.033 00:01:12.033 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:12.033 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:12.033 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:12.033 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:12.033 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:12.033 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:12.033 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:12.034 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:12.034 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:12.034 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:12.034 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:12.034 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:12.034 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:12.034 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:12.034 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:12.034 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:12.034 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:12.034 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:12.034 + rm -f /tmp/spdk-ld-path 00:01:12.034 + source autorun-spdk.conf 00:01:12.034 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.034 ++ SPDK_TEST_NVMF=1 00:01:12.034 ++ SPDK_TEST_NVME_CLI=1 00:01:12.034 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.034 ++ SPDK_TEST_NVMF_NICS=e810 00:01:12.034 ++ SPDK_TEST_VFIOUSER=1 00:01:12.034 ++ SPDK_RUN_UBSAN=1 00:01:12.034 ++ NET_TYPE=phy 00:01:12.034 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:12.034 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:12.034 ++ RUN_NIGHTLY=1 00:01:12.034 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:12.034 + [[ -n '' ]] 00:01:12.034 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:12.292 + for M in /var/spdk/build-*-manifest.txt 00:01:12.292 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:12.293 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:12.293 + for M in /var/spdk/build-*-manifest.txt 00:01:12.293 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:12.293 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:12.293 ++ uname 00:01:12.293 + [[ Linux == \L\i\n\u\x ]] 00:01:12.293 + sudo dmesg -T 00:01:12.293 + sudo dmesg --clear 00:01:12.293 + dmesg_pid=2551534 00:01:12.293 + [[ Fedora Linux == FreeBSD ]] 00:01:12.293 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:12.293 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:12.293 + sudo dmesg -Tw 00:01:12.293 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:12.293 + [[ -x /usr/src/fio-static/fio ]] 00:01:12.293 + export FIO_BIN=/usr/src/fio-static/fio 00:01:12.293 + FIO_BIN=/usr/src/fio-static/fio 00:01:12.293 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:12.293 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:12.293 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:12.293 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:12.293 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:12.293 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:12.293 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:12.293 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:12.293 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:12.293 Test configuration: 00:01:12.293 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.293 SPDK_TEST_NVMF=1 00:01:12.293 SPDK_TEST_NVME_CLI=1 00:01:12.293 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.293 SPDK_TEST_NVMF_NICS=e810 00:01:12.293 SPDK_TEST_VFIOUSER=1 00:01:12.293 SPDK_RUN_UBSAN=1 00:01:12.293 NET_TYPE=phy 00:01:12.293 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:12.293 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:12.293 RUN_NIGHTLY=1 04:17:32 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:12.293 04:17:32 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:12.293 04:17:32 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:12.293 04:17:32 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:12.293 04:17:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:12.293 04:17:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:12.293 04:17:32 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:12.293 04:17:32 -- paths/export.sh@5 -- $ export PATH 00:01:12.293 04:17:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:12.293 04:17:32 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:12.293 04:17:32 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:12.293 04:17:32 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720923452.XXXXXX 00:01:12.293 04:17:32 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720923452.XrU1Cr 00:01:12.293 04:17:32 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:12.293 04:17:32 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:01:12.293 04:17:32 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:12.293 04:17:32 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:12.293 04:17:32 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:12.293 04:17:32 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:12.293 04:17:32 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:12.293 04:17:32 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:12.293 04:17:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:12.293 04:17:32 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:12.293 04:17:32 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:12.293 04:17:32 -- pm/common@17 -- $ local monitor 00:01:12.293 04:17:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:12.293 04:17:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:12.293 04:17:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:12.293 04:17:32 -- pm/common@21 -- $ date +%s 00:01:12.293 04:17:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:12.293 04:17:32 -- pm/common@21 -- $ date +%s 00:01:12.293 04:17:32 -- pm/common@25 -- $ sleep 1 00:01:12.293 04:17:32 -- pm/common@21 -- $ date +%s 00:01:12.293 04:17:32 -- pm/common@21 -- $ date +%s 00:01:12.293 04:17:32 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720923452 00:01:12.293 04:17:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720923452 00:01:12.293 04:17:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720923452 00:01:12.293 04:17:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720923452 00:01:12.293 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720923452_collect-vmstat.pm.log 00:01:12.293 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720923452_collect-cpu-load.pm.log 00:01:12.293 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720923452_collect-cpu-temp.pm.log 00:01:12.293 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720923452_collect-bmc-pm.bmc.pm.log 00:01:13.234 04:17:33 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:13.234 04:17:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:13.234 04:17:33 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:13.234 04:17:33 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:13.234 04:17:33 -- spdk/autobuild.sh@16 -- $ date -u 00:01:13.234 Sun Jul 14 02:17:33 AM UTC 2024 00:01:13.234 04:17:33 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:13.234 v24.05-13-g5fa2f5086 00:01:13.234 04:17:33 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:13.234 04:17:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:13.234 04:17:33 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:13.234 04:17:33 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:13.234 04:17:33 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:13.234 04:17:33 -- common/autotest_common.sh@10 -- $ set +x 00:01:13.234 ************************************ 00:01:13.234 START TEST ubsan 00:01:13.234 ************************************ 00:01:13.234 04:17:33 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:13.234 using ubsan 00:01:13.234 00:01:13.234 real 0m0.000s 00:01:13.234 user 0m0.000s 00:01:13.234 sys 0m0.000s 00:01:13.234 04:17:33 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:13.234 04:17:33 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:13.234 ************************************ 00:01:13.234 END TEST ubsan 00:01:13.234 ************************************ 00:01:13.492 04:17:33 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:13.492 04:17:33 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:13.492 04:17:33 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:13.492 04:17:33 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:13.492 04:17:33 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:13.492 04:17:33 -- common/autotest_common.sh@10 -- $ set +x 
00:01:13.492 ************************************ 00:01:13.492 START TEST build_native_dpdk 00:01:13.492 ************************************ 00:01:13.492 04:17:33 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:01:13.492 04:17:33 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:13.492 04:17:33 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:13.492 04:17:33 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:13.492 04:17:33 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:13.492 04:17:33 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:13.492 04:17:33 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:13.492 04:17:33 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:13.492 04:17:33 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:13.492 04:17:33 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:13.492 04:17:33 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:13.492 04:17:33 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:13.492 04:17:33 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:13.492 04:17:33 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:13.492 04:17:33 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:13.492 04:17:33 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:13.492 04:17:33 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:13.492 04:17:33 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:13.492 04:17:33 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:13.492 04:17:33 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:13.493 caf0f5d395 version: 22.11.4 00:01:13.493 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:13.493 dc9c799c7d vhost: fix missing spinlock unlock 00:01:13.493 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:13.493 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:13.493 
04:17:33 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:13.493 04:17:33 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:13.493 patching file config/rte_config.h 00:01:13.493 Hunk #1 succeeded at 60 (offset 1 line). 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:13.493 04:17:33 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:17.720 The Meson build system 00:01:17.720 Version: 1.3.1 00:01:17.720 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:17.720 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:17.720 Build type: native build 00:01:17.720 Program cat found: YES (/usr/bin/cat) 00:01:17.720 Project name: DPDK 00:01:17.720 Project version: 22.11.4 00:01:17.720 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:17.720 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:17.720 Host machine cpu family: x86_64 00:01:17.720 Host machine cpu: x86_64 00:01:17.720 Message: ## Building in Developer Mode ## 00:01:17.720 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:17.720 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:17.720 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:17.720 Program objdump found: YES (/usr/bin/objdump) 00:01:17.720 Program python3 found: YES (/usr/bin/python3) 00:01:17.720 Program cat found: YES (/usr/bin/cat) 00:01:17.720 config/meson.build:83: WARNING: The "machine" option is 
deprecated. Please use "cpu_instruction_set" instead. 00:01:17.720 Checking for size of "void *" : 8 00:01:17.720 Checking for size of "void *" : 8 (cached) 00:01:17.720 Library m found: YES 00:01:17.720 Library numa found: YES 00:01:17.720 Has header "numaif.h" : YES 00:01:17.720 Library fdt found: NO 00:01:17.720 Library execinfo found: NO 00:01:17.720 Has header "execinfo.h" : YES 00:01:17.720 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:17.720 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:17.720 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:17.720 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:17.720 Run-time dependency openssl found: YES 3.0.9 00:01:17.720 Run-time dependency libpcap found: YES 1.10.4 00:01:17.720 Has header "pcap.h" with dependency libpcap: YES 00:01:17.720 Compiler for C supports arguments -Wcast-qual: YES 00:01:17.720 Compiler for C supports arguments -Wdeprecated: YES 00:01:17.720 Compiler for C supports arguments -Wformat: YES 00:01:17.720 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:17.720 Compiler for C supports arguments -Wformat-security: NO 00:01:17.720 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:17.720 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:17.720 Compiler for C supports arguments -Wnested-externs: YES 00:01:17.720 Compiler for C supports arguments -Wold-style-definition: YES 00:01:17.720 Compiler for C supports arguments -Wpointer-arith: YES 00:01:17.720 Compiler for C supports arguments -Wsign-compare: YES 00:01:17.720 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:17.720 Compiler for C supports arguments -Wundef: YES 00:01:17.720 Compiler for C supports arguments -Wwrite-strings: YES 00:01:17.720 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:17.720 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:17.720 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:17.720 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:17.720 Compiler for C supports arguments -mavx512f: YES 00:01:17.720 Checking if "AVX512 checking" compiles: YES 00:01:17.720 Fetching value of define "__SSE4_2__" : 1 00:01:17.720 Fetching value of define "__AES__" : 1 00:01:17.720 Fetching value of define "__AVX__" : 1 00:01:17.720 Fetching value of define "__AVX2__" : (undefined) 00:01:17.720 Fetching value of define "__AVX512BW__" : (undefined) 00:01:17.720 Fetching value of define "__AVX512CD__" : (undefined) 00:01:17.720 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:17.720 Fetching value of define "__AVX512F__" : (undefined) 00:01:17.720 Fetching value of define "__AVX512VL__" : (undefined) 00:01:17.720 Fetching value of define "__PCLMUL__" : 1 00:01:17.720 Fetching value of define "__RDRND__" : 1 00:01:17.720 Fetching value of define "__RDSEED__" : (undefined) 00:01:17.720 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:17.720 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:17.720 Message: lib/kvargs: Defining dependency "kvargs" 00:01:17.720 Message: lib/telemetry: Defining dependency "telemetry" 00:01:17.720 Checking for function "getentropy" : YES 00:01:17.720 Message: lib/eal: Defining dependency "eal" 00:01:17.720 Message: lib/ring: Defining dependency "ring" 00:01:17.720 Message: lib/rcu: Defining dependency "rcu" 00:01:17.720 Message: lib/mempool: Defining dependency "mempool" 00:01:17.720 Message: 
lib/mbuf: Defining dependency "mbuf" 00:01:17.720 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:17.720 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:17.720 Compiler for C supports arguments -mpclmul: YES 00:01:17.720 Compiler for C supports arguments -maes: YES 00:01:17.720 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:17.720 Compiler for C supports arguments -mavx512bw: YES 00:01:17.720 Compiler for C supports arguments -mavx512dq: YES 00:01:17.720 Compiler for C supports arguments -mavx512vl: YES 00:01:17.720 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:17.720 Compiler for C supports arguments -mavx2: YES 00:01:17.720 Compiler for C supports arguments -mavx: YES 00:01:17.720 Message: lib/net: Defining dependency "net" 00:01:17.720 Message: lib/meter: Defining dependency "meter" 00:01:17.720 Message: lib/ethdev: Defining dependency "ethdev" 00:01:17.720 Message: lib/pci: Defining dependency "pci" 00:01:17.720 Message: lib/cmdline: Defining dependency "cmdline" 00:01:17.720 Message: lib/metrics: Defining dependency "metrics" 00:01:17.720 Message: lib/hash: Defining dependency "hash" 00:01:17.720 Message: lib/timer: Defining dependency "timer" 00:01:17.720 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:17.720 Compiler for C supports arguments -mavx2: YES (cached) 00:01:17.720 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:17.720 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:17.720 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:17.720 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:17.720 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:17.720 Message: lib/acl: Defining dependency "acl" 00:01:17.720 Message: lib/bbdev: Defining dependency "bbdev" 00:01:17.720 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:17.720 Run-time dependency libelf found: YES 0.190 00:01:17.720 Message: lib/bpf: Defining dependency "bpf" 00:01:17.720 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:17.720 Message: lib/compressdev: Defining dependency "compressdev" 00:01:17.720 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:17.720 Message: lib/distributor: Defining dependency "distributor" 00:01:17.720 Message: lib/efd: Defining dependency "efd" 00:01:17.720 Message: lib/eventdev: Defining dependency "eventdev" 00:01:17.720 Message: lib/gpudev: Defining dependency "gpudev" 00:01:17.721 Message: lib/gro: Defining dependency "gro" 00:01:17.721 Message: lib/gso: Defining dependency "gso" 00:01:17.721 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:17.721 Message: lib/jobstats: Defining dependency "jobstats" 00:01:17.721 Message: lib/latencystats: Defining dependency "latencystats" 00:01:17.721 Message: lib/lpm: Defining dependency "lpm" 00:01:17.721 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:17.721 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:17.721 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:17.721 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:17.721 Message: lib/member: Defining dependency "member" 00:01:17.721 Message: lib/pcapng: Defining dependency "pcapng" 00:01:17.721 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:17.721 Message: lib/power: Defining dependency "power" 00:01:17.721 Message: lib/rawdev: Defining dependency "rawdev" 00:01:17.721 
Message: lib/regexdev: Defining dependency "regexdev" 00:01:17.721 Message: lib/dmadev: Defining dependency "dmadev" 00:01:17.721 Message: lib/rib: Defining dependency "rib" 00:01:17.721 Message: lib/reorder: Defining dependency "reorder" 00:01:17.721 Message: lib/sched: Defining dependency "sched" 00:01:17.721 Message: lib/security: Defining dependency "security" 00:01:17.721 Message: lib/stack: Defining dependency "stack" 00:01:17.721 Has header "linux/userfaultfd.h" : YES 00:01:17.721 Message: lib/vhost: Defining dependency "vhost" 00:01:17.721 Message: lib/ipsec: Defining dependency "ipsec" 00:01:17.721 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:17.721 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:17.721 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:17.721 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:17.721 Message: lib/fib: Defining dependency "fib" 00:01:17.721 Message: lib/port: Defining dependency "port" 00:01:17.721 Message: lib/pdump: Defining dependency "pdump" 00:01:17.721 Message: lib/table: Defining dependency "table" 00:01:17.721 Message: lib/pipeline: Defining dependency "pipeline" 00:01:17.721 Message: lib/graph: Defining dependency "graph" 00:01:17.721 Message: lib/node: Defining dependency "node" 00:01:17.721 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:17.721 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:17.721 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:17.721 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:17.721 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:17.721 Compiler for C supports arguments -Wno-unused-value: YES 00:01:18.663 Compiler for C supports arguments -Wno-format: YES 00:01:18.663 Compiler for C supports arguments -Wno-format-security: YES 00:01:18.663 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:18.663 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:18.663 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:18.663 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:18.663 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:18.663 Compiler for C supports arguments -mavx2: YES (cached) 00:01:18.663 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:18.663 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:18.663 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:18.663 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:18.663 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:18.663 Program doxygen found: YES (/usr/bin/doxygen) 00:01:18.663 Configuring doxy-api.conf using configuration 00:01:18.663 Program sphinx-build found: NO 00:01:18.663 Configuring rte_build_config.h using configuration 00:01:18.663 Message: 00:01:18.663 ================= 00:01:18.663 Applications Enabled 00:01:18.663 ================= 00:01:18.663 00:01:18.663 apps: 00:01:18.663 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:18.663 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:18.663 test-security-perf, 00:01:18.663 00:01:18.663 Message: 00:01:18.663 ================= 00:01:18.663 Libraries Enabled 00:01:18.663 ================= 00:01:18.664 00:01:18.664 libs: 00:01:18.664 kvargs, telemetry, eal, ring, rcu, 
mempool, mbuf, net, 00:01:18.664 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:18.664 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:18.664 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:18.664 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:18.664 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:18.664 table, pipeline, graph, node, 00:01:18.664 00:01:18.664 Message: 00:01:18.664 =============== 00:01:18.664 Drivers Enabled 00:01:18.664 =============== 00:01:18.664 00:01:18.664 common: 00:01:18.664 00:01:18.664 bus: 00:01:18.664 pci, vdev, 00:01:18.664 mempool: 00:01:18.664 ring, 00:01:18.664 dma: 00:01:18.664 00:01:18.664 net: 00:01:18.664 i40e, 00:01:18.664 raw: 00:01:18.664 00:01:18.664 crypto: 00:01:18.664 00:01:18.664 compress: 00:01:18.664 00:01:18.664 regex: 00:01:18.664 00:01:18.664 vdpa: 00:01:18.664 00:01:18.664 event: 00:01:18.664 00:01:18.664 baseband: 00:01:18.664 00:01:18.664 gpu: 00:01:18.664 00:01:18.664 00:01:18.664 Message: 00:01:18.664 ================= 00:01:18.664 Content Skipped 00:01:18.664 ================= 00:01:18.664 00:01:18.664 apps: 00:01:18.664 00:01:18.664 libs: 00:01:18.664 kni: explicitly disabled via build config (deprecated lib) 00:01:18.664 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:18.664 00:01:18.664 drivers: 00:01:18.664 common/cpt: not in enabled drivers build config 00:01:18.664 common/dpaax: not in enabled drivers build config 00:01:18.664 common/iavf: not in enabled drivers build config 00:01:18.664 common/idpf: not in enabled drivers build config 00:01:18.664 common/mvep: not in enabled drivers build config 00:01:18.664 common/octeontx: not in enabled drivers build config 00:01:18.664 bus/auxiliary: not in enabled drivers build config 00:01:18.664 bus/dpaa: not in enabled drivers build config 00:01:18.664 bus/fslmc: not in enabled drivers build config 00:01:18.664 bus/ifpga: not in enabled drivers build config 00:01:18.664 bus/vmbus: not in enabled drivers build config 00:01:18.664 common/cnxk: not in enabled drivers build config 00:01:18.664 common/mlx5: not in enabled drivers build config 00:01:18.664 common/qat: not in enabled drivers build config 00:01:18.664 common/sfc_efx: not in enabled drivers build config 00:01:18.664 mempool/bucket: not in enabled drivers build config 00:01:18.664 mempool/cnxk: not in enabled drivers build config 00:01:18.664 mempool/dpaa: not in enabled drivers build config 00:01:18.664 mempool/dpaa2: not in enabled drivers build config 00:01:18.664 mempool/octeontx: not in enabled drivers build config 00:01:18.664 mempool/stack: not in enabled drivers build config 00:01:18.664 dma/cnxk: not in enabled drivers build config 00:01:18.664 dma/dpaa: not in enabled drivers build config 00:01:18.664 dma/dpaa2: not in enabled drivers build config 00:01:18.664 dma/hisilicon: not in enabled drivers build config 00:01:18.664 dma/idxd: not in enabled drivers build config 00:01:18.664 dma/ioat: not in enabled drivers build config 00:01:18.664 dma/skeleton: not in enabled drivers build config 00:01:18.664 net/af_packet: not in enabled drivers build config 00:01:18.664 net/af_xdp: not in enabled drivers build config 00:01:18.664 net/ark: not in enabled drivers build config 00:01:18.664 net/atlantic: not in enabled drivers build config 00:01:18.664 net/avp: not in enabled drivers build config 00:01:18.664 net/axgbe: not in enabled drivers build config 00:01:18.664 net/bnx2x: not in enabled 
drivers build config 00:01:18.664 net/bnxt: not in enabled drivers build config 00:01:18.664 net/bonding: not in enabled drivers build config 00:01:18.664 net/cnxk: not in enabled drivers build config 00:01:18.664 net/cxgbe: not in enabled drivers build config 00:01:18.664 net/dpaa: not in enabled drivers build config 00:01:18.664 net/dpaa2: not in enabled drivers build config 00:01:18.664 net/e1000: not in enabled drivers build config 00:01:18.664 net/ena: not in enabled drivers build config 00:01:18.664 net/enetc: not in enabled drivers build config 00:01:18.664 net/enetfec: not in enabled drivers build config 00:01:18.664 net/enic: not in enabled drivers build config 00:01:18.664 net/failsafe: not in enabled drivers build config 00:01:18.664 net/fm10k: not in enabled drivers build config 00:01:18.664 net/gve: not in enabled drivers build config 00:01:18.664 net/hinic: not in enabled drivers build config 00:01:18.664 net/hns3: not in enabled drivers build config 00:01:18.664 net/iavf: not in enabled drivers build config 00:01:18.664 net/ice: not in enabled drivers build config 00:01:18.664 net/idpf: not in enabled drivers build config 00:01:18.664 net/igc: not in enabled drivers build config 00:01:18.664 net/ionic: not in enabled drivers build config 00:01:18.664 net/ipn3ke: not in enabled drivers build config 00:01:18.664 net/ixgbe: not in enabled drivers build config 00:01:18.664 net/kni: not in enabled drivers build config 00:01:18.664 net/liquidio: not in enabled drivers build config 00:01:18.664 net/mana: not in enabled drivers build config 00:01:18.664 net/memif: not in enabled drivers build config 00:01:18.664 net/mlx4: not in enabled drivers build config 00:01:18.664 net/mlx5: not in enabled drivers build config 00:01:18.664 net/mvneta: not in enabled drivers build config 00:01:18.664 net/mvpp2: not in enabled drivers build config 00:01:18.664 net/netvsc: not in enabled drivers build config 00:01:18.664 net/nfb: not in enabled drivers build config 00:01:18.664 net/nfp: not in enabled drivers build config 00:01:18.664 net/ngbe: not in enabled drivers build config 00:01:18.664 net/null: not in enabled drivers build config 00:01:18.664 net/octeontx: not in enabled drivers build config 00:01:18.664 net/octeon_ep: not in enabled drivers build config 00:01:18.664 net/pcap: not in enabled drivers build config 00:01:18.664 net/pfe: not in enabled drivers build config 00:01:18.664 net/qede: not in enabled drivers build config 00:01:18.664 net/ring: not in enabled drivers build config 00:01:18.664 net/sfc: not in enabled drivers build config 00:01:18.664 net/softnic: not in enabled drivers build config 00:01:18.664 net/tap: not in enabled drivers build config 00:01:18.664 net/thunderx: not in enabled drivers build config 00:01:18.664 net/txgbe: not in enabled drivers build config 00:01:18.664 net/vdev_netvsc: not in enabled drivers build config 00:01:18.664 net/vhost: not in enabled drivers build config 00:01:18.664 net/virtio: not in enabled drivers build config 00:01:18.664 net/vmxnet3: not in enabled drivers build config 00:01:18.664 raw/cnxk_bphy: not in enabled drivers build config 00:01:18.664 raw/cnxk_gpio: not in enabled drivers build config 00:01:18.664 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:18.664 raw/ifpga: not in enabled drivers build config 00:01:18.664 raw/ntb: not in enabled drivers build config 00:01:18.664 raw/skeleton: not in enabled drivers build config 00:01:18.664 crypto/armv8: not in enabled drivers build config 00:01:18.664 crypto/bcmfs: not in 
enabled drivers build config 00:01:18.664 crypto/caam_jr: not in enabled drivers build config 00:01:18.664 crypto/ccp: not in enabled drivers build config 00:01:18.664 crypto/cnxk: not in enabled drivers build config 00:01:18.664 crypto/dpaa_sec: not in enabled drivers build config 00:01:18.664 crypto/dpaa2_sec: not in enabled drivers build config 00:01:18.664 crypto/ipsec_mb: not in enabled drivers build config 00:01:18.664 crypto/mlx5: not in enabled drivers build config 00:01:18.664 crypto/mvsam: not in enabled drivers build config 00:01:18.664 crypto/nitrox: not in enabled drivers build config 00:01:18.664 crypto/null: not in enabled drivers build config 00:01:18.664 crypto/octeontx: not in enabled drivers build config 00:01:18.664 crypto/openssl: not in enabled drivers build config 00:01:18.664 crypto/scheduler: not in enabled drivers build config 00:01:18.664 crypto/uadk: not in enabled drivers build config 00:01:18.664 crypto/virtio: not in enabled drivers build config 00:01:18.664 compress/isal: not in enabled drivers build config 00:01:18.664 compress/mlx5: not in enabled drivers build config 00:01:18.664 compress/octeontx: not in enabled drivers build config 00:01:18.664 compress/zlib: not in enabled drivers build config 00:01:18.664 regex/mlx5: not in enabled drivers build config 00:01:18.664 regex/cn9k: not in enabled drivers build config 00:01:18.664 vdpa/ifc: not in enabled drivers build config 00:01:18.664 vdpa/mlx5: not in enabled drivers build config 00:01:18.664 vdpa/sfc: not in enabled drivers build config 00:01:18.664 event/cnxk: not in enabled drivers build config 00:01:18.664 event/dlb2: not in enabled drivers build config 00:01:18.664 event/dpaa: not in enabled drivers build config 00:01:18.664 event/dpaa2: not in enabled drivers build config 00:01:18.664 event/dsw: not in enabled drivers build config 00:01:18.664 event/opdl: not in enabled drivers build config 00:01:18.664 event/skeleton: not in enabled drivers build config 00:01:18.664 event/sw: not in enabled drivers build config 00:01:18.664 event/octeontx: not in enabled drivers build config 00:01:18.664 baseband/acc: not in enabled drivers build config 00:01:18.664 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:18.664 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:18.664 baseband/la12xx: not in enabled drivers build config 00:01:18.664 baseband/null: not in enabled drivers build config 00:01:18.664 baseband/turbo_sw: not in enabled drivers build config 00:01:18.664 gpu/cuda: not in enabled drivers build config 00:01:18.664 00:01:18.664 00:01:18.664 Build targets in project: 316 00:01:18.664 00:01:18.664 DPDK 22.11.4 00:01:18.664 00:01:18.664 User defined options 00:01:18.664 libdir : lib 00:01:18.664 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:18.664 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:18.664 c_link_args : 00:01:18.664 enable_docs : false 00:01:18.664 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:18.664 enable_kmods : false 00:01:18.664 machine : native 00:01:18.664 tests : false 00:01:18.664 00:01:18.664 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:18.664 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
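The meson configure step above, together with the ninja build that follows, can be reproduced by hand against the same DPDK checkout. A minimal sketch using the flags from this log, with $WS standing in for the Jenkins workspace path, and assuming an install into the --prefix before SPDK is configured against it (that install step is not shown in this excerpt):

    WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    cd "$WS/dpdk"
    # Configure DPDK 22.11.4 with only the drivers this test enables
    # ("meson setup" per the deprecation warning printed just above).
    meson setup build-tmp --prefix="$WS/dpdk/build" --libdir lib \
      -Denable_docs=false -Denable_kmods=false -Dtests=false \
      -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Dmachine=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
    # Build, then install into the prefix so dpdk/build/ holds the libraries
    # SPDK links against (install assumed here).
    ninja -C build-tmp -j"$(nproc)" install
    # Point SPDK at the external DPDK; --with-dpdk and --enable-ubsan are
    # taken from the config_params printed earlier in this log.
    cd "$WS/spdk"
    ./configure --with-dpdk="$WS/dpdk/build" --enable-ubsan
    make -j"$(nproc)"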
00:01:18.664 04:17:38 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:18.664 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:18.664 [1/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:18.664 [2/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:18.664 [3/745] Generating lib/rte_kvargs_def with a custom command 00:01:18.665 [4/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:18.665 [5/745] Generating lib/rte_telemetry_def with a custom command 00:01:18.665 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:18.665 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:18.665 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:18.665 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:18.665 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:18.665 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:18.665 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:18.665 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:18.665 [14/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:18.665 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:18.665 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:18.665 [17/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:18.931 [18/745] Linking static target lib/librte_kvargs.a 00:01:18.931 [19/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:18.931 [20/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:18.931 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:18.931 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:18.931 [23/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:18.931 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:18.931 [25/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:18.931 [26/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:18.931 [27/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:18.931 [28/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:18.931 [29/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:18.931 [30/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:18.931 [31/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:18.931 [32/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:18.931 [33/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:18.931 [34/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:18.931 [35/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:18.931 [36/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:18.931 [37/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:18.931 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:18.931 [39/745] Generating lib/rte_eal_def with a custom command 00:01:18.931 [40/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:18.931 [41/745] Generating lib/rte_eal_mingw with a custom command 00:01:18.931 [42/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:18.931 [43/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:18.931 [44/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:18.931 [45/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:18.931 [46/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:18.931 [47/745] Generating lib/rte_ring_def with a custom command 00:01:18.931 [48/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:18.931 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:18.931 [50/745] Generating lib/rte_ring_mingw with a custom command 00:01:18.931 [51/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:18.931 [52/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:18.931 [53/745] Generating lib/rte_rcu_def with a custom command 00:01:18.931 [54/745] Generating lib/rte_rcu_mingw with a custom command 00:01:18.931 [55/745] Generating lib/rte_mempool_def with a custom command 00:01:18.931 [56/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:18.931 [57/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:18.931 [58/745] Generating lib/rte_mempool_mingw with a custom command 00:01:18.931 [59/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:18.931 [60/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:18.931 [61/745] Generating lib/rte_mbuf_def with a custom command 00:01:18.931 [62/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:18.931 [63/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:18.931 [64/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:18.931 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:18.931 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:19.193 [67/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:19.194 [68/745] Generating lib/rte_meter_def with a custom command 00:01:19.194 [69/745] Generating lib/rte_meter_mingw with a custom command 00:01:19.194 [70/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:19.194 [71/745] Generating lib/rte_net_def with a custom command 00:01:19.194 [72/745] Generating lib/rte_net_mingw with a custom command 00:01:19.194 [73/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:19.194 [74/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:19.194 [75/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:19.194 [76/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:19.194 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:19.194 [78/745] Generating lib/rte_ethdev_def with a custom command 00:01:19.194 [79/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.194 [80/745] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:19.194 [81/745] Linking static target lib/librte_ring.a 00:01:19.194 [82/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:19.194 [83/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:19.194 [84/745] Linking target lib/librte_kvargs.so.23.0 00:01:19.194 [85/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:19.194 [86/745] Linking static target lib/librte_meter.a 00:01:19.454 [87/745] Generating lib/rte_pci_def with a custom command 00:01:19.454 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:19.454 [89/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:19.454 [90/745] Generating lib/rte_pci_mingw with a custom command 00:01:19.454 [91/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:19.454 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:19.454 [93/745] Linking static target lib/librte_pci.a 00:01:19.454 [94/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:19.454 [95/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:19.454 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:19.454 [97/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:19.455 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:19.722 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.722 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:19.722 [101/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.722 [102/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:19.722 [103/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:19.722 [104/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:19.722 [105/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:19.722 [106/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.722 [107/745] Linking static target lib/librte_telemetry.a 00:01:19.722 [108/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:19.722 [109/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:19.722 [110/745] Generating lib/rte_cmdline_def with a custom command 00:01:19.722 [111/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:19.722 [112/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:19.722 [113/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:19.722 [114/745] Generating lib/rte_metrics_def with a custom command 00:01:19.722 [115/745] Generating lib/rte_metrics_mingw with a custom command 00:01:19.722 [116/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:19.722 [117/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:19.984 [118/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:19.984 [119/745] Generating lib/rte_hash_def with a custom command 00:01:19.984 [120/745] Generating lib/rte_hash_mingw with a custom command 00:01:19.984 [121/745] Generating lib/rte_timer_def with a custom command 00:01:19.984 [122/745] Generating 
lib/rte_timer_mingw with a custom command 00:01:19.984 [123/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:19.984 [124/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:19.984 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:20.245 [126/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:20.245 [127/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:20.245 [128/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:20.245 [129/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:20.245 [130/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:20.245 [131/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:20.245 [132/745] Generating lib/rte_acl_def with a custom command 00:01:20.245 [133/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:20.245 [134/745] Generating lib/rte_acl_mingw with a custom command 00:01:20.245 [135/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:20.245 [136/745] Generating lib/rte_bbdev_def with a custom command 00:01:20.245 [137/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:20.245 [138/745] Generating lib/rte_bitratestats_def with a custom command 00:01:20.245 [139/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:20.245 [140/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:20.245 [141/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:20.245 [142/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:20.246 [143/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.506 [144/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:20.506 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:20.506 [146/745] Linking target lib/librte_telemetry.so.23.0 00:01:20.506 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:20.506 [148/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:20.506 [149/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:20.506 [150/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:20.506 [151/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:20.506 [152/745] Generating lib/rte_bpf_def with a custom command 00:01:20.506 [153/745] Generating lib/rte_bpf_mingw with a custom command 00:01:20.506 [154/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:20.506 [155/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:20.506 [156/745] Generating lib/rte_cfgfile_def with a custom command 00:01:20.506 [157/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:20.506 [158/745] Generating lib/rte_compressdev_def with a custom command 00:01:20.506 [159/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:20.506 [160/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:20.769 [161/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:20.770 [162/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:20.770 [163/745] Generating lib/rte_cryptodev_def with a custom command 00:01:20.770 
[164/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:20.770 [165/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:20.770 [166/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:20.770 [167/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:20.770 [168/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:20.770 [169/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:20.770 [170/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:20.770 [171/745] Linking static target lib/librte_timer.a 00:01:20.770 [172/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:20.770 [173/745] Linking static target lib/librte_cmdline.a 00:01:20.770 [174/745] Generating lib/rte_distributor_mingw with a custom command 00:01:20.770 [175/745] Generating lib/rte_distributor_def with a custom command 00:01:20.770 [176/745] Linking static target lib/librte_rcu.a 00:01:20.770 [177/745] Generating lib/rte_efd_def with a custom command 00:01:20.770 [178/745] Generating lib/rte_efd_mingw with a custom command 00:01:20.770 [179/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:20.770 [180/745] Linking static target lib/librte_net.a 00:01:21.058 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:21.058 [182/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:21.058 [183/745] Linking static target lib/librte_mempool.a 00:01:21.058 [184/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:21.058 [185/745] Linking static target lib/librte_metrics.a 00:01:21.058 [186/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:21.058 [187/745] Linking static target lib/librte_cfgfile.a 00:01:21.325 [188/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:21.325 [189/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.325 [190/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.325 [191/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.325 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:21.325 [193/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:21.325 [194/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:21.325 [195/745] Generating lib/rte_eventdev_def with a custom command 00:01:21.325 [196/745] Linking static target lib/librte_eal.a 00:01:21.325 [197/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:21.325 [198/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:21.325 [199/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:21.587 [200/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:21.587 [201/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:21.587 [202/745] Generating lib/rte_gpudev_def with a custom command 00:01:21.587 [203/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:21.587 [204/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:21.587 [205/745] Linking static target lib/librte_bitratestats.a 00:01:21.587 [206/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:21.587 [207/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:21.587 [208/745] Generating lib/rte_gro_def with a custom command 00:01:21.587 [209/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.587 [210/745] Generating lib/rte_gro_mingw with a custom command 00:01:21.852 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:21.852 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:21.852 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:21.852 [214/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:21.852 [215/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.853 [216/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:21.853 [217/745] Generating lib/rte_gso_mingw with a custom command 00:01:21.853 [218/745] Generating lib/rte_gso_def with a custom command 00:01:21.853 [219/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:21.853 [220/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:22.117 [221/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:22.117 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:22.117 [223/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:22.117 [224/745] Generating lib/rte_ip_frag_def with a custom command 00:01:22.117 [225/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:22.117 [226/745] Linking static target lib/librte_bbdev.a 00:01:22.117 [227/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:22.117 [228/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.117 [229/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.117 [230/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:22.117 [231/745] Generating lib/rte_jobstats_def with a custom command 00:01:22.379 [232/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:22.379 [233/745] Generating lib/rte_latencystats_def with a custom command 00:01:22.379 [234/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:22.379 [235/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:22.379 [236/745] Generating lib/rte_lpm_def with a custom command 00:01:22.379 [237/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:22.379 [238/745] Generating lib/rte_lpm_mingw with a custom command 00:01:22.379 [239/745] Linking static target lib/librte_compressdev.a 00:01:22.379 [240/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:22.379 [241/745] Linking static target lib/librte_jobstats.a 00:01:22.379 [242/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:22.640 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:22.640 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:22.640 [245/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:22.640 [246/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:22.908 [247/745] Generating lib/rte_member_def with a custom command 00:01:22.908 [248/745] Linking static 
target lib/librte_distributor.a 00:01:22.908 [249/745] Generating lib/rte_member_mingw with a custom command 00:01:22.908 [250/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.908 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:22.908 [252/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:22.908 [253/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:22.908 [254/745] Generating lib/rte_pcapng_def with a custom command 00:01:22.908 [255/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.171 [256/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:23.171 [257/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:23.171 [258/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:23.171 [259/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:23.171 [260/745] Linking static target lib/librte_bpf.a 00:01:23.171 [261/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:23.171 [262/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:23.171 [263/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:23.171 [264/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:23.171 [265/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:23.171 [266/745] Generating lib/rte_power_def with a custom command 00:01:23.171 [267/745] Generating lib/rte_power_mingw with a custom command 00:01:23.171 [268/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:23.171 [269/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:23.171 [270/745] Linking static target lib/librte_gpudev.a 00:01:23.171 [271/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:23.171 [272/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.171 [273/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:23.171 [274/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:23.171 [275/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:23.171 [276/745] Linking static target lib/librte_gro.a 00:01:23.171 [277/745] Generating lib/rte_rawdev_def with a custom command 00:01:23.171 [278/745] Generating lib/rte_regexdev_def with a custom command 00:01:23.171 [279/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:23.433 [280/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:23.433 [281/745] Generating lib/rte_dmadev_def with a custom command 00:01:23.433 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:23.433 [283/745] Generating lib/rte_rib_def with a custom command 00:01:23.433 [284/745] Generating lib/rte_rib_mingw with a custom command 00:01:23.433 [285/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:23.433 [286/745] Generating lib/rte_reorder_def with a custom command 00:01:23.433 [287/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:23.433 [288/745] Generating lib/rte_reorder_mingw with a custom command 00:01:23.693 [289/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.694 [290/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.694 [291/745] Compiling C 
object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:23.694 [292/745] Generating lib/rte_sched_def with a custom command 00:01:23.694 [293/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:23.694 [294/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:23.694 [295/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.694 [296/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:23.694 [297/745] Generating lib/rte_sched_mingw with a custom command 00:01:23.694 [298/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:23.694 [299/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:23.694 [300/745] Generating lib/rte_security_def with a custom command 00:01:23.694 [301/745] Generating lib/rte_security_mingw with a custom command 00:01:23.694 [302/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:23.694 [303/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:23.694 [304/745] Linking static target lib/librte_latencystats.a 00:01:23.694 [305/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:23.694 [306/745] Generating lib/rte_stack_mingw with a custom command 00:01:23.694 [307/745] Generating lib/rte_stack_def with a custom command 00:01:23.694 [308/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:23.958 [309/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:23.958 [310/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:23.958 [311/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:23.958 [312/745] Linking static target lib/librte_rawdev.a 00:01:23.958 [313/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:23.958 [314/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:23.958 [315/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:23.958 [316/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:23.958 [317/745] Generating lib/rte_vhost_def with a custom command 00:01:23.958 [318/745] Linking static target lib/librte_stack.a 00:01:23.958 [319/745] Generating lib/rte_vhost_mingw with a custom command 00:01:23.958 [320/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:23.958 [321/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:23.958 [322/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:23.958 [323/745] Linking static target lib/librte_dmadev.a 00:01:24.232 [324/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:24.232 [325/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.232 [326/745] Linking static target lib/librte_ip_frag.a 00:01:24.232 [327/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:24.232 [328/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:24.232 [329/745] Generating lib/rte_ipsec_def with a custom command 00:01:24.232 [330/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.232 [331/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:24.232 [332/745] Generating lib/rte_ipsec_mingw with a custom command 
00:01:24.494 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:24.494 [334/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.494 [335/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:24.756 [336/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.756 [337/745] Generating lib/rte_fib_def with a custom command 00:01:24.756 [338/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.756 [339/745] Generating lib/rte_fib_mingw with a custom command 00:01:24.756 [340/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:24.756 [341/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:24.756 [342/745] Linking static target lib/librte_regexdev.a 00:01:24.756 [343/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:24.756 [344/745] Linking static target lib/librte_gso.a 00:01:24.756 [345/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:24.756 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.017 [347/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:25.017 [348/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:25.017 [349/745] Linking static target lib/librte_efd.a 00:01:25.017 [350/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.017 [351/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:25.280 [352/745] Linking static target lib/librte_pcapng.a 00:01:25.280 [353/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:25.280 [354/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:25.280 [355/745] Linking static target lib/librte_lpm.a 00:01:25.280 [356/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:25.280 [357/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:25.280 [358/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:25.280 [359/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:25.280 [360/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:25.280 [361/745] Linking static target lib/librte_reorder.a 00:01:25.549 [362/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:25.549 [363/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.549 [364/745] Generating lib/rte_port_def with a custom command 00:01:25.549 [365/745] Generating lib/rte_port_mingw with a custom command 00:01:25.549 [366/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:25.549 [367/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:25.549 [368/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:25.549 [369/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:25.549 [370/745] Generating lib/rte_pdump_def with a custom command 00:01:25.549 [371/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:25.549 [372/745] Generating lib/rte_pdump_mingw with a custom command 00:01:25.549 [373/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:25.549 [374/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:25.549 [375/745] Generating 
lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.811 [376/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:25.811 [377/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:25.811 [378/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:25.811 [379/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:25.811 [380/745] Linking static target lib/acl/libavx2_tmp.a 00:01:25.811 [381/745] Linking static target lib/librte_security.a 00:01:25.811 [382/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.811 [383/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:25.811 [384/745] Linking static target lib/librte_power.a 00:01:25.811 [385/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.811 [386/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.811 [387/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:26.076 [388/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:26.077 [389/745] Linking static target lib/librte_rib.a 00:01:26.077 [390/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:26.077 [391/745] Linking static target lib/librte_hash.a 00:01:26.077 [392/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:26.077 [393/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:26.077 [394/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:26.077 [395/745] Linking static target lib/acl/libavx512_tmp.a 00:01:26.077 [396/745] Linking static target lib/librte_acl.a 00:01:26.343 [397/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:26.343 [398/745] Generating lib/rte_table_def with a custom command 00:01:26.343 [399/745] Generating lib/rte_table_mingw with a custom command 00:01:26.343 [400/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:26.343 [401/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.343 [402/745] Linking static target lib/librte_ethdev.a 00:01:26.603 [403/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:26.603 [404/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.603 [405/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.871 [406/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:26.871 [407/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:26.871 [408/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:26.871 [409/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:26.871 [410/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:26.871 [411/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:26.871 [412/745] Linking static target lib/librte_mbuf.a 00:01:26.871 [413/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:26.871 [414/745] Generating lib/rte_pipeline_def with a custom command 00:01:26.871 [415/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:26.871 [416/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:27.129 [417/745] 
Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:27.129 [418/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:27.129 [419/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:27.129 [420/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:27.129 [421/745] Linking static target lib/librte_fib.a 00:01:27.129 [422/745] Generating lib/rte_graph_mingw with a custom command 00:01:27.129 [423/745] Generating lib/rte_graph_def with a custom command 00:01:27.129 [424/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:27.129 [425/745] Linking static target lib/librte_eventdev.a 00:01:27.129 [426/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.390 [427/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:27.390 [428/745] Linking static target lib/librte_member.a 00:01:27.390 [429/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:27.390 [430/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.390 [431/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:27.390 [432/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:27.390 [433/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:27.390 [434/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:27.390 [435/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:27.390 [436/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:27.390 [437/745] Generating lib/rte_node_def with a custom command 00:01:27.390 [438/745] Generating lib/rte_node_mingw with a custom command 00:01:27.390 [439/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:27.654 [440/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.654 [441/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:27.654 [442/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:27.654 [443/745] Linking static target lib/librte_sched.a 00:01:27.654 [444/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:27.654 [445/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:27.654 [446/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:27.654 [447/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.654 [448/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:27.920 [449/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.920 [450/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:27.920 [451/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:27.920 [452/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:27.920 [453/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:27.920 [454/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:27.920 [455/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:27.920 [456/745] Generating drivers/rte_mempool_ring_def with a custom command 00:01:27.920 [457/745] Generating drivers/rte_mempool_ring_mingw with a custom 
command 00:01:27.920 [458/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:27.920 [459/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:27.920 [460/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:27.920 [461/745] Linking static target lib/librte_cryptodev.a 00:01:28.181 [462/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:28.181 [463/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:28.181 [464/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:28.181 [465/745] Linking static target lib/librte_pdump.a 00:01:28.181 [466/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:28.181 [467/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:28.181 [468/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:28.181 [469/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:28.181 [470/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:28.181 [471/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:28.181 [472/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:28.181 [473/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:28.445 [474/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:28.445 [475/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.445 [476/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:28.445 [477/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:28.445 [478/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:28.445 [479/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:28.445 [480/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:28.445 [481/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:28.445 [482/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:28.707 [483/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.707 [484/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:28.707 [485/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:28.707 [486/745] Linking static target drivers/librte_bus_vdev.a 00:01:28.707 [487/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:28.707 [488/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:28.707 [489/745] Linking static target lib/librte_ipsec.a 00:01:28.707 [490/745] Linking static target lib/librte_table.a 00:01:28.707 [491/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:28.970 [492/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:28.970 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:28.970 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:29.233 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.233 [496/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:29.233 [497/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:29.233 [498/745] Compiling 
C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:29.233 [499/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:29.233 [500/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:29.233 [501/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.233 [502/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:29.233 [503/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:29.233 [504/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:29.498 [505/745] Linking static target lib/librte_graph.a 00:01:29.498 [506/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:29.498 [507/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:29.498 [508/745] Linking static target drivers/librte_bus_pci.a 00:01:29.498 [509/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:29.498 [510/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:29.498 [511/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:29.762 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:29.762 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:29.762 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.027 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:30.027 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.027 [517/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.291 [518/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:30.291 [519/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:30.291 [520/745] Linking static target lib/librte_port.a 00:01:30.291 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:30.291 [522/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:30.291 [523/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:30.562 [524/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:30.562 [525/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:30.562 [526/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:30.850 [527/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.850 [528/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:30.850 [529/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:30.850 [530/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:30.850 [531/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:30.850 [532/745] Linking static target drivers/librte_mempool_ring.a 00:01:30.850 [533/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:30.850 [534/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:30.850 [535/745] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:31.150 [536/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:31.150 [537/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:31.150 [538/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:31.419 [539/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:31.419 [540/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.419 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.684 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:31.684 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:31.684 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:31.684 [545/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:31.684 [546/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:31.945 [547/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:31.945 [548/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:31.945 [549/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:31.945 [550/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:31.945 [551/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:32.208 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:32.208 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:32.471 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:32.471 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:32.471 [556/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:32.733 [557/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:32.733 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:32.994 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:32.994 [560/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:32.994 [561/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:33.255 [562/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:33.255 [563/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:33.255 [564/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:33.255 [565/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:33.255 [566/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:33.255 [567/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:33.255 [568/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:33.255 [569/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:33.255 [570/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:33.517 [571/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 
00:01:33.517 [572/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:33.517 [573/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:33.781 [574/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:33.781 [575/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:34.044 [576/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:34.044 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:34.044 [578/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:34.044 [579/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:34.044 [580/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.044 [581/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:34.044 [582/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:34.044 [583/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:34.044 [584/745] Linking target lib/librte_eal.so.23.0 00:01:34.307 [585/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:34.307 [586/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:34.307 [587/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:34.307 [588/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:34.572 [589/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.572 [590/745] Linking target lib/librte_ring.so.23.0 00:01:34.572 [591/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:34.572 [592/745] Linking target lib/librte_meter.so.23.0 00:01:34.572 [593/745] Linking target lib/librte_pci.so.23.0 00:01:34.833 [594/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:34.833 [595/745] Linking target lib/librte_rcu.so.23.0 00:01:34.833 [596/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:34.833 [597/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:34.833 [598/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:34.833 [599/745] Linking target lib/librte_mempool.so.23.0 00:01:34.833 [600/745] Linking target lib/librte_timer.so.23.0 00:01:35.101 [601/745] Linking target lib/librte_acl.so.23.0 00:01:35.101 [602/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:35.101 [603/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:35.101 [604/745] Linking target lib/librte_cfgfile.so.23.0 00:01:35.101 [605/745] Linking target lib/librte_jobstats.so.23.0 00:01:35.101 [606/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:35.101 [607/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:35.102 [608/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:35.102 [609/745] Linking target lib/librte_rawdev.so.23.0 00:01:35.102 [610/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:35.102 [611/745] Generating symbol file 
lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:35.102 [612/745] Linking target lib/librte_stack.so.23.0 00:01:35.102 [613/745] Linking target lib/librte_dmadev.so.23.0 00:01:35.102 [614/745] Linking target lib/librte_graph.so.23.0 00:01:35.102 [615/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:35.102 [616/745] Linking target drivers/librte_bus_vdev.so.23.0 00:01:35.102 [617/745] Linking target drivers/librte_bus_pci.so.23.0 00:01:35.102 [618/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:35.102 [619/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:35.361 [620/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:35.361 [621/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:35.361 [622/745] Linking target lib/librte_mbuf.so.23.0 00:01:35.361 [623/745] Linking target lib/librte_rib.so.23.0 00:01:35.361 [624/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:35.361 [625/745] Linking target drivers/librte_mempool_ring.so.23.0 00:01:35.361 [626/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:35.361 [627/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:35.361 [628/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:35.361 [629/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:35.361 [630/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:35.361 [631/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:35.361 [632/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:35.361 [633/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:35.361 [634/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:35.361 [635/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:35.620 [636/745] Linking target lib/librte_gpudev.so.23.0 00:01:35.620 [637/745] Linking target lib/librte_reorder.so.23.0 00:01:35.620 [638/745] Linking target lib/librte_bbdev.so.23.0 00:01:35.620 [639/745] Linking target lib/librte_distributor.so.23.0 00:01:35.620 [640/745] Linking target lib/librte_net.so.23.0 00:01:35.620 [641/745] Linking target lib/librte_regexdev.so.23.0 00:01:35.620 [642/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:35.620 [643/745] Linking target lib/librte_compressdev.so.23.0 00:01:35.620 [644/745] Linking target lib/librte_fib.so.23.0 00:01:35.620 [645/745] Linking target lib/librte_sched.so.23.0 00:01:35.620 [646/745] Linking target lib/librte_cryptodev.so.23.0 00:01:35.620 [647/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:35.620 [648/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:35.620 [649/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:35.620 [650/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:35.620 [651/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:35.620 [652/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:35.620 [653/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 
00:01:35.620 [654/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:35.879 [655/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:35.879 [656/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:35.879 [657/745] Linking target lib/librte_security.so.23.0 00:01:35.879 [658/745] Linking target lib/librte_cmdline.so.23.0 00:01:35.879 [659/745] Linking target lib/librte_ethdev.so.23.0 00:01:35.879 [660/745] Linking target lib/librte_hash.so.23.0 00:01:35.879 [661/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:35.879 [662/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:35.879 [663/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:35.879 [664/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:35.879 [665/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:35.879 [666/745] Linking target lib/librte_metrics.so.23.0 00:01:35.879 [667/745] Linking target lib/librte_pcapng.so.23.0 00:01:35.879 [668/745] Linking target lib/librte_member.so.23.0 00:01:36.138 [669/745] Linking target lib/librte_lpm.so.23.0 00:01:36.138 [670/745] Linking target lib/librte_efd.so.23.0 00:01:36.138 [671/745] Linking target lib/librte_gso.so.23.0 00:01:36.138 [672/745] Linking target lib/librte_ip_frag.so.23.0 00:01:36.138 [673/745] Linking target lib/librte_gro.so.23.0 00:01:36.138 [674/745] Linking target lib/librte_bpf.so.23.0 00:01:36.138 [675/745] Linking target lib/librte_ipsec.so.23.0 00:01:36.138 [676/745] Linking target lib/librte_power.so.23.0 00:01:36.138 [677/745] Linking target lib/librte_eventdev.so.23.0 00:01:36.138 [678/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:36.138 [679/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:36.138 [680/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:36.138 [681/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:36.138 [682/745] Linking target lib/librte_latencystats.so.23.0 00:01:36.138 [683/745] Linking target lib/librte_bitratestats.so.23.0 00:01:36.138 [684/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:36.138 [685/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:36.138 [686/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:36.138 [687/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:36.138 [688/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:36.138 [689/745] Linking target lib/librte_pdump.so.23.0 00:01:36.397 [690/745] Linking target lib/librte_port.so.23.0 00:01:36.397 [691/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:36.397 [692/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:36.397 [693/745] Linking target lib/librte_table.so.23.0 00:01:36.397 [694/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:36.655 [695/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:36.655 [696/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:36.914 [697/745] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:36.914 [698/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:37.172 [699/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:37.172 [700/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:37.172 [701/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:37.172 [702/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:37.430 [703/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:37.430 [704/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:37.430 [705/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:37.430 [706/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:37.430 [707/745] Linking static target drivers/librte_net_i40e.a 00:01:37.997 [708/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:37.997 [709/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:37.997 [710/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.255 [711/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:38.255 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:01:39.189 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:39.447 [714/745] Linking static target lib/librte_node.a 00:01:39.447 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.705 [716/745] Linking target lib/librte_node.so.23.0 00:01:39.705 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:39.963 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:41.338 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:49.471 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:21.542 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:21.542 [722/745] Linking static target lib/librte_vhost.a 00:02:21.542 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.542 [724/745] Linking target lib/librte_vhost.so.23.0 00:02:36.407 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:36.407 [726/745] Linking static target lib/librte_pipeline.a 00:02:36.974 [727/745] Linking target app/dpdk-dumpcap 00:02:36.974 [728/745] Linking target app/dpdk-pdump 00:02:36.974 [729/745] Linking target app/dpdk-test-security-perf 00:02:36.974 [730/745] Linking target app/dpdk-test-acl 00:02:36.974 [731/745] Linking target app/dpdk-test-fib 00:02:36.974 [732/745] Linking target app/dpdk-proc-info 00:02:36.974 [733/745] Linking target app/dpdk-test-cmdline 00:02:36.974 [734/745] Linking target app/dpdk-test-gpudev 00:02:36.974 [735/745] Linking target app/dpdk-test-sad 00:02:36.974 [736/745] Linking target app/dpdk-test-compress-perf 00:02:36.974 [737/745] Linking target app/dpdk-test-flow-perf 00:02:36.974 [738/745] Linking target app/dpdk-test-pipeline 00:02:36.974 [739/745] Linking target app/dpdk-test-regex 00:02:36.974 [740/745] Linking target app/dpdk-test-eventdev 00:02:36.974 [741/745] Linking target app/dpdk-test-bbdev 00:02:36.974 [742/745] Linking target 
app/dpdk-test-crypto-perf 00:02:36.974 [743/745] Linking target app/dpdk-testpmd 00:02:38.885 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.885 [745/745] Linking target lib/librte_pipeline.so.23.0 00:02:38.885 04:18:58 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:38.885 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:38.885 [0/1] Installing files. 00:02:39.147 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.147 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.148 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:39.148 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.148 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:39.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:39.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:39.150 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:39.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:39.151 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 
00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:39.151 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:39.152 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:39.152 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.152 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.152 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.152 
Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.152 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.152 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.152 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.152 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.152 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.152 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.152 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_bitratestats.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_pcapng.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.412 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.683 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.683 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.683 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.683 Installing lib/librte_pipeline.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.683 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.683 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.683 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.683 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.683 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.683 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:39.683 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.683 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:39.683 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.683 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:39.683 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.683 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:39.683 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.683 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.683 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.683 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.683 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.683 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.683 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.683 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.683 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.683 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.683 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.683 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.683 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.683 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.683 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.683 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.683 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.683 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.683 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.684 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.685 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.686 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.687 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.687 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.687 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.687 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.687 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.687 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.687 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.687 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.687 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.687 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.687 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.687 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:39.687 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:39.687 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:39.687 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:39.687 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:39.687 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:39.687 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:39.687 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:39.687 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:39.687 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:39.687 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:39.687 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:39.687 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:39.687 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:39.687 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:39.687 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:39.687 Installing symlink pointing to librte_net.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:39.687 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:39.687 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:39.687 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:39.687 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:39.687 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:39.687 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:39.687 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:39.687 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:39.687 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:39.687 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:39.687 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:39.687 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:39.687 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:39.687 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:39.687 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:39.687 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:39.687 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:39.687 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:39.687 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:39.687 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:39.687 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:39.687 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:39.687 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:39.687 Installing symlink pointing to librte_cfgfile.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:39.687 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:39.687 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:39.687 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:39.687 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:39.687 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:39.687 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:39.687 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:39.687 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:39.687 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:39.687 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:39.687 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:39.687 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:39.687 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:39.687 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:39.687 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:39.687 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:39.687 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:39.687 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:39.687 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:39.687 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:39.687 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:39.687 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:39.687 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:39.687 Installing symlink pointing to 
librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:39.687 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:39.687 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:39.688 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:39.688 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:39.688 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:39.688 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:39.688 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:39.688 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:39.688 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:39.688 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:39.688 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:39.688 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:39.688 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:39.688 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:39.688 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:39.688 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:39.688 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:39.688 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:39.688 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:39.688 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:39.688 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:39.688 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:39.688 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:39.688 Installing symlink pointing to librte_vhost.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:39.688 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:39.688 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:39.688 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:39.688 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:39.688 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:39.688 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:39.688 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:39.688 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:39.688 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:39.688 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:39.688 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:39.688 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:39.688 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:39.688 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:39.688 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:39.688 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:39.688 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:39.688 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:39.688 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:39.688 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:39.688 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:39.688 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:39.688 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:39.688 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:39.688 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:39.688 './librte_bus_vdev.so.23' -> 
'dpdk/pmds-23.0/librte_bus_vdev.so.23'
00:02:39.688 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0'
00:02:39.688 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so'
00:02:39.688 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23'
00:02:39.688 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0'
00:02:39.688 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so'
00:02:39.688 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23'
00:02:39.688 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0'
00:02:39.688 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23
00:02:39.688 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so
00:02:39.688 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23
00:02:39.688 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so
00:02:39.688 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0'
00:02:39.973 04:18:59 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s
00:02:39.973 04:18:59 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:02:39.973 04:18:59 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat
00:02:39.973 04:18:59 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:39.973
00:02:39.973 real 1m26.447s
00:02:39.973 user 14m29.220s
00:02:39.973 sys 1m48.055s
00:02:39.973 04:18:59 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable
00:02:39.973 04:18:59 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:02:39.973 ************************************
00:02:39.973 END TEST build_native_dpdk
00:02:39.973 ************************************
00:02:39.973 04:18:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:39.973 04:18:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:39.973 04:18:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:39.973 04:18:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:39.973 04:18:59 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:39.973 04:18:59 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:39.973 04:18:59 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:39.973 04:18:59 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
00:02:39.973 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
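The configure step above resolves DPDK through the libdpdk.pc and libdpdk-libs.pc files installed into the build tree a moment earlier. A minimal sketch of how that metadata could be queried by hand, assuming the same workspace path and a standard pkg-config binary (illustrative only, not output captured in this log):

    # Point pkg-config at the freshly installed DPDK .pc files
    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk   # installed DPDK version
    pkg-config --cflags libdpdk       # include flags a consumer such as SPDK compiles with
    pkg-config --libs libdpdk         # link flags for the shared DPDK libraries

The following log lines record the library and include paths that configure derived from this metadata.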
00:02:39.973 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.973 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.973 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:40.231 Using 'verbs' RDMA provider
00:02:50.764 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:58.871 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:59.127 Creating mk/config.mk...done.
00:02:59.127 Creating mk/cc.flags.mk...done.
00:02:59.127 Type 'make' to build.
00:02:59.127 04:19:19 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:02:59.127 04:19:19 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
00:02:59.127 04:19:19 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:02:59.127 04:19:19 -- common/autotest_common.sh@10 -- $ set +x
00:02:59.127 ************************************
00:02:59.127 START TEST make
00:02:59.127 ************************************
00:02:59.127 04:19:19 make -- common/autotest_common.sh@1121 -- $ make -j48
00:02:59.384 make[1]: Nothing to be done for 'all'.
00:03:00.777 The Meson build system
00:03:00.777 Version: 1.3.1
00:03:00.777 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:00.777 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:00.777 Build type: native build
00:03:00.777 Project name: libvfio-user
00:03:00.777 Project version: 0.0.1
00:03:00.777 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:00.777 C linker for the host machine: gcc ld.bfd 2.39-16
00:03:00.777 Host machine cpu family: x86_64
00:03:00.777 Host machine cpu: x86_64
00:03:00.777 Run-time dependency threads found: YES
00:03:00.777 Library dl found: YES
00:03:00.777 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:00.777 Run-time dependency json-c found: YES 0.17
00:03:00.777 Run-time dependency cmocka found: YES 1.1.7
00:03:00.777 Program pytest-3 found: NO
00:03:00.777 Program flake8 found: NO
00:03:00.777 Program misspell-fixer found: NO
00:03:00.777 Program restructuredtext-lint found: NO
00:03:00.777 Program valgrind found: YES (/usr/bin/valgrind)
00:03:00.777 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:00.777 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:00.777 Compiler for C supports arguments -Wwrite-strings: YES
00:03:00.778 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:00.778 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:00.778 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:00.778 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:00.778 Build targets in project: 8 00:03:00.778 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:00.778 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:00.778 00:03:00.778 libvfio-user 0.0.1 00:03:00.778 00:03:00.778 User defined options 00:03:00.778 buildtype : debug 00:03:00.778 default_library: shared 00:03:00.778 libdir : /usr/local/lib 00:03:00.778 00:03:00.778 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:01.729 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:01.995 [1/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:01.995 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:01.995 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:01.995 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:01.995 [5/37] Compiling C object samples/null.p/null.c.o 00:03:01.995 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:01.995 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:01.995 [8/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:01.995 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:01.995 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:01.995 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:01.995 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:01.995 [13/37] Compiling C object samples/server.p/server.c.o 00:03:01.995 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:01.995 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:01.995 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:01.995 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:01.995 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:01.995 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:01.995 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:01.995 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:01.995 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:01.995 [23/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:01.995 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:01.995 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:02.260 [26/37] Compiling C object samples/client.p/client.c.o 00:03:02.260 [27/37] Linking target samples/client 00:03:02.260 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:02.260 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:02.260 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:03:02.260 [31/37] Linking target test/unit_tests 00:03:02.521 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:02.521 [33/37] Linking target samples/lspci 00:03:02.521 [34/37] Linking target samples/server 00:03:02.521 [35/37] Linking target samples/shadow_ioeventfd_server 00:03:02.521 [36/37] Linking target samples/null 00:03:02.521 [37/37] Linking target samples/gpio-pci-idio-16 00:03:02.521 INFO: autodetecting backend as ninja 00:03:02.521 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:03:02.789 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:03.362 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:03.362 ninja: no work to do. 00:03:15.566 CC lib/ut/ut.o 00:03:15.566 CC lib/log/log.o 00:03:15.566 CC lib/log/log_flags.o 00:03:15.566 CC lib/log/log_deprecated.o 00:03:15.566 CC lib/ut_mock/mock.o 00:03:15.566 LIB libspdk_ut.a 00:03:15.566 LIB libspdk_log.a 00:03:15.566 LIB libspdk_ut_mock.a 00:03:15.566 SO libspdk_ut.so.2.0 00:03:15.566 SO libspdk_log.so.7.0 00:03:15.566 SO libspdk_ut_mock.so.6.0 00:03:15.566 SYMLINK libspdk_ut.so 00:03:15.566 SYMLINK libspdk_ut_mock.so 00:03:15.566 SYMLINK libspdk_log.so 00:03:15.566 CC lib/util/base64.o 00:03:15.566 CC lib/ioat/ioat.o 00:03:15.566 CC lib/dma/dma.o 00:03:15.566 CXX lib/trace_parser/trace.o 00:03:15.566 CC lib/util/bit_array.o 00:03:15.566 CC lib/util/cpuset.o 00:03:15.566 CC lib/util/crc16.o 00:03:15.566 CC lib/util/crc32.o 00:03:15.566 CC lib/util/crc32c.o 00:03:15.566 CC lib/util/crc32_ieee.o 00:03:15.566 CC lib/util/crc64.o 00:03:15.566 CC lib/util/dif.o 00:03:15.566 CC lib/util/fd.o 00:03:15.566 CC lib/util/file.o 00:03:15.566 CC lib/util/hexlify.o 00:03:15.566 CC lib/util/iov.o 00:03:15.566 CC lib/util/math.o 00:03:15.566 CC lib/util/pipe.o 00:03:15.566 CC lib/util/strerror_tls.o 00:03:15.566 CC lib/util/string.o 00:03:15.566 CC lib/util/uuid.o 00:03:15.566 CC lib/util/fd_group.o 00:03:15.566 CC lib/util/xor.o 00:03:15.566 CC lib/util/zipf.o 00:03:15.566 CC lib/vfio_user/host/vfio_user_pci.o 00:03:15.566 CC lib/vfio_user/host/vfio_user.o 00:03:15.566 LIB libspdk_dma.a 00:03:15.566 SO libspdk_dma.so.4.0 00:03:15.566 SYMLINK libspdk_dma.so 00:03:15.566 LIB libspdk_ioat.a 00:03:15.824 SO libspdk_ioat.so.7.0 00:03:15.824 LIB libspdk_vfio_user.a 00:03:15.824 SYMLINK libspdk_ioat.so 00:03:15.824 SO libspdk_vfio_user.so.5.0 00:03:15.824 SYMLINK libspdk_vfio_user.so 00:03:15.824 LIB libspdk_util.a 00:03:16.082 SO libspdk_util.so.9.0 00:03:16.082 SYMLINK libspdk_util.so 00:03:16.340 CC lib/conf/conf.o 00:03:16.340 CC lib/idxd/idxd.o 00:03:16.340 CC lib/env_dpdk/env.o 00:03:16.340 CC lib/env_dpdk/memory.o 00:03:16.340 CC lib/idxd/idxd_user.o 00:03:16.340 CC lib/env_dpdk/pci.o 00:03:16.340 CC lib/json/json_parse.o 00:03:16.340 CC lib/idxd/idxd_kernel.o 00:03:16.340 CC lib/env_dpdk/init.o 00:03:16.340 CC lib/json/json_util.o 00:03:16.340 CC lib/vmd/vmd.o 00:03:16.340 CC lib/env_dpdk/threads.o 00:03:16.340 CC lib/json/json_write.o 00:03:16.340 CC lib/env_dpdk/pci_ioat.o 00:03:16.340 CC lib/rdma/common.o 00:03:16.340 CC lib/vmd/led.o 00:03:16.340 CC lib/env_dpdk/pci_virtio.o 00:03:16.340 CC lib/rdma/rdma_verbs.o 00:03:16.340 CC lib/env_dpdk/pci_vmd.o 00:03:16.340 CC lib/env_dpdk/pci_idxd.o 00:03:16.340 CC lib/env_dpdk/pci_event.o 00:03:16.340 CC lib/env_dpdk/sigbus_handler.o 00:03:16.340 CC lib/env_dpdk/pci_dpdk.o 00:03:16.340 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:16.340 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:16.340 LIB libspdk_trace_parser.a 00:03:16.340 SO libspdk_trace_parser.so.5.0 00:03:16.598 SYMLINK libspdk_trace_parser.so 00:03:16.598 LIB libspdk_conf.a 00:03:16.598 SO libspdk_conf.so.6.0 00:03:16.598 LIB libspdk_json.a 00:03:16.598 LIB libspdk_rdma.a 00:03:16.598 SYMLINK libspdk_conf.so 00:03:16.598 SO libspdk_rdma.so.6.0 00:03:16.598 SO libspdk_json.so.6.0 00:03:16.598 SYMLINK libspdk_rdma.so 00:03:16.598 SYMLINK 
libspdk_json.so 00:03:16.857 CC lib/jsonrpc/jsonrpc_server.o 00:03:16.857 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:16.857 CC lib/jsonrpc/jsonrpc_client.o 00:03:16.857 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:16.857 LIB libspdk_idxd.a 00:03:16.857 SO libspdk_idxd.so.12.0 00:03:16.857 SYMLINK libspdk_idxd.so 00:03:17.115 LIB libspdk_vmd.a 00:03:17.115 SO libspdk_vmd.so.6.0 00:03:17.115 SYMLINK libspdk_vmd.so 00:03:17.115 LIB libspdk_jsonrpc.a 00:03:17.115 SO libspdk_jsonrpc.so.6.0 00:03:17.372 SYMLINK libspdk_jsonrpc.so 00:03:17.372 CC lib/rpc/rpc.o 00:03:17.629 LIB libspdk_rpc.a 00:03:17.629 SO libspdk_rpc.so.6.0 00:03:17.629 SYMLINK libspdk_rpc.so 00:03:17.887 CC lib/keyring/keyring.o 00:03:17.887 CC lib/keyring/keyring_rpc.o 00:03:17.887 CC lib/notify/notify.o 00:03:17.887 CC lib/trace/trace.o 00:03:17.887 CC lib/notify/notify_rpc.o 00:03:17.887 CC lib/trace/trace_flags.o 00:03:17.887 CC lib/trace/trace_rpc.o 00:03:18.145 LIB libspdk_notify.a 00:03:18.145 SO libspdk_notify.so.6.0 00:03:18.145 LIB libspdk_keyring.a 00:03:18.145 SYMLINK libspdk_notify.so 00:03:18.145 LIB libspdk_trace.a 00:03:18.145 SO libspdk_keyring.so.1.0 00:03:18.145 SO libspdk_trace.so.10.0 00:03:18.145 SYMLINK libspdk_keyring.so 00:03:18.145 SYMLINK libspdk_trace.so 00:03:18.402 LIB libspdk_env_dpdk.a 00:03:18.402 SO libspdk_env_dpdk.so.14.0 00:03:18.402 CC lib/sock/sock.o 00:03:18.402 CC lib/sock/sock_rpc.o 00:03:18.402 CC lib/thread/thread.o 00:03:18.402 CC lib/thread/iobuf.o 00:03:18.402 SYMLINK libspdk_env_dpdk.so 00:03:18.967 LIB libspdk_sock.a 00:03:18.967 SO libspdk_sock.so.9.0 00:03:18.967 SYMLINK libspdk_sock.so 00:03:18.967 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:18.967 CC lib/nvme/nvme_ctrlr.o 00:03:18.967 CC lib/nvme/nvme_fabric.o 00:03:18.967 CC lib/nvme/nvme_ns_cmd.o 00:03:18.968 CC lib/nvme/nvme_ns.o 00:03:18.968 CC lib/nvme/nvme_pcie_common.o 00:03:18.968 CC lib/nvme/nvme_pcie.o 00:03:18.968 CC lib/nvme/nvme_qpair.o 00:03:18.968 CC lib/nvme/nvme.o 00:03:18.968 CC lib/nvme/nvme_quirks.o 00:03:18.968 CC lib/nvme/nvme_transport.o 00:03:18.968 CC lib/nvme/nvme_discovery.o 00:03:18.968 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:18.968 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:18.968 CC lib/nvme/nvme_tcp.o 00:03:18.968 CC lib/nvme/nvme_opal.o 00:03:18.968 CC lib/nvme/nvme_io_msg.o 00:03:18.968 CC lib/nvme/nvme_poll_group.o 00:03:18.968 CC lib/nvme/nvme_zns.o 00:03:18.968 CC lib/nvme/nvme_stubs.o 00:03:18.968 CC lib/nvme/nvme_auth.o 00:03:18.968 CC lib/nvme/nvme_cuse.o 00:03:18.968 CC lib/nvme/nvme_vfio_user.o 00:03:18.968 CC lib/nvme/nvme_rdma.o 00:03:19.901 LIB libspdk_thread.a 00:03:20.159 SO libspdk_thread.so.10.0 00:03:20.159 SYMLINK libspdk_thread.so 00:03:20.159 CC lib/virtio/virtio.o 00:03:20.159 CC lib/blob/blobstore.o 00:03:20.159 CC lib/accel/accel.o 00:03:20.159 CC lib/vfu_tgt/tgt_endpoint.o 00:03:20.159 CC lib/virtio/virtio_vhost_user.o 00:03:20.159 CC lib/init/json_config.o 00:03:20.159 CC lib/blob/request.o 00:03:20.159 CC lib/vfu_tgt/tgt_rpc.o 00:03:20.159 CC lib/accel/accel_rpc.o 00:03:20.159 CC lib/virtio/virtio_vfio_user.o 00:03:20.159 CC lib/init/subsystem.o 00:03:20.159 CC lib/blob/zeroes.o 00:03:20.159 CC lib/accel/accel_sw.o 00:03:20.159 CC lib/virtio/virtio_pci.o 00:03:20.159 CC lib/init/subsystem_rpc.o 00:03:20.159 CC lib/blob/blob_bs_dev.o 00:03:20.159 CC lib/init/rpc.o 00:03:20.417 LIB libspdk_init.a 00:03:20.675 SO libspdk_init.so.5.0 00:03:20.675 LIB libspdk_vfu_tgt.a 00:03:20.675 LIB libspdk_virtio.a 00:03:20.675 SYMLINK libspdk_init.so 00:03:20.675 SO libspdk_vfu_tgt.so.3.0 00:03:20.675 
SO libspdk_virtio.so.7.0 00:03:20.675 SYMLINK libspdk_vfu_tgt.so 00:03:20.675 SYMLINK libspdk_virtio.so 00:03:20.675 CC lib/event/app.o 00:03:20.675 CC lib/event/reactor.o 00:03:20.675 CC lib/event/log_rpc.o 00:03:20.675 CC lib/event/app_rpc.o 00:03:20.675 CC lib/event/scheduler_static.o 00:03:21.241 LIB libspdk_event.a 00:03:21.241 SO libspdk_event.so.13.0 00:03:21.241 SYMLINK libspdk_event.so 00:03:21.241 LIB libspdk_accel.a 00:03:21.241 SO libspdk_accel.so.15.0 00:03:21.499 LIB libspdk_nvme.a 00:03:21.499 SYMLINK libspdk_accel.so 00:03:21.499 SO libspdk_nvme.so.13.0 00:03:21.499 CC lib/bdev/bdev.o 00:03:21.499 CC lib/bdev/bdev_rpc.o 00:03:21.499 CC lib/bdev/bdev_zone.o 00:03:21.499 CC lib/bdev/part.o 00:03:21.499 CC lib/bdev/scsi_nvme.o 00:03:21.757 SYMLINK libspdk_nvme.so 00:03:23.130 LIB libspdk_blob.a 00:03:23.130 SO libspdk_blob.so.11.0 00:03:23.387 SYMLINK libspdk_blob.so 00:03:23.387 CC lib/lvol/lvol.o 00:03:23.387 CC lib/blobfs/blobfs.o 00:03:23.387 CC lib/blobfs/tree.o 00:03:23.951 LIB libspdk_bdev.a 00:03:24.209 SO libspdk_bdev.so.15.0 00:03:24.209 SYMLINK libspdk_bdev.so 00:03:24.209 LIB libspdk_blobfs.a 00:03:24.209 SO libspdk_blobfs.so.10.0 00:03:24.472 SYMLINK libspdk_blobfs.so 00:03:24.472 CC lib/nvmf/ctrlr.o 00:03:24.472 CC lib/scsi/dev.o 00:03:24.472 CC lib/nvmf/ctrlr_discovery.o 00:03:24.472 CC lib/nvmf/ctrlr_bdev.o 00:03:24.472 CC lib/scsi/lun.o 00:03:24.472 CC lib/ftl/ftl_core.o 00:03:24.472 CC lib/nvmf/subsystem.o 00:03:24.472 CC lib/scsi/port.o 00:03:24.472 CC lib/nbd/nbd.o 00:03:24.472 CC lib/nvmf/nvmf.o 00:03:24.472 CC lib/ftl/ftl_init.o 00:03:24.472 CC lib/ublk/ublk.o 00:03:24.472 CC lib/nvmf/nvmf_rpc.o 00:03:24.472 CC lib/scsi/scsi.o 00:03:24.472 CC lib/ftl/ftl_layout.o 00:03:24.472 CC lib/ublk/ublk_rpc.o 00:03:24.472 CC lib/nbd/nbd_rpc.o 00:03:24.472 CC lib/nvmf/transport.o 00:03:24.472 CC lib/ftl/ftl_debug.o 00:03:24.472 CC lib/scsi/scsi_bdev.o 00:03:24.472 CC lib/nvmf/tcp.o 00:03:24.472 CC lib/ftl/ftl_io.o 00:03:24.472 CC lib/nvmf/stubs.o 00:03:24.472 CC lib/scsi/scsi_pr.o 00:03:24.472 CC lib/scsi/scsi_rpc.o 00:03:24.472 CC lib/ftl/ftl_sb.o 00:03:24.472 CC lib/nvmf/mdns_server.o 00:03:24.472 CC lib/ftl/ftl_l2p_flat.o 00:03:24.473 CC lib/ftl/ftl_l2p.o 00:03:24.473 CC lib/nvmf/vfio_user.o 00:03:24.473 CC lib/scsi/task.o 00:03:24.473 CC lib/nvmf/rdma.o 00:03:24.473 CC lib/ftl/ftl_nv_cache.o 00:03:24.473 CC lib/nvmf/auth.o 00:03:24.473 CC lib/ftl/ftl_band.o 00:03:24.473 CC lib/ftl/ftl_band_ops.o 00:03:24.473 CC lib/ftl/ftl_writer.o 00:03:24.473 CC lib/ftl/ftl_rq.o 00:03:24.473 CC lib/ftl/ftl_reloc.o 00:03:24.473 CC lib/ftl/ftl_l2p_cache.o 00:03:24.473 CC lib/ftl/ftl_p2l.o 00:03:24.473 CC lib/ftl/mngt/ftl_mngt.o 00:03:24.473 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:24.473 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:24.473 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:24.473 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:24.473 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:24.473 LIB libspdk_lvol.a 00:03:24.473 SO libspdk_lvol.so.10.0 00:03:24.733 SYMLINK libspdk_lvol.so 00:03:24.733 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:24.733 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:24.733 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:24.733 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:24.733 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:24.733 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:24.733 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:24.733 CC lib/ftl/utils/ftl_conf.o 00:03:24.733 CC lib/ftl/utils/ftl_md.o 00:03:24.733 CC lib/ftl/utils/ftl_mempool.o 00:03:24.733 CC lib/ftl/utils/ftl_bitmap.o 00:03:24.733 CC 
lib/ftl/utils/ftl_property.o 00:03:24.994 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:24.994 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:24.994 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:24.994 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:24.994 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:24.994 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:24.994 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:24.994 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:24.994 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:24.994 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:24.994 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:24.994 CC lib/ftl/base/ftl_base_dev.o 00:03:24.994 CC lib/ftl/base/ftl_base_bdev.o 00:03:24.994 CC lib/ftl/ftl_trace.o 00:03:25.290 LIB libspdk_nbd.a 00:03:25.290 SO libspdk_nbd.so.7.0 00:03:25.290 LIB libspdk_scsi.a 00:03:25.290 SYMLINK libspdk_nbd.so 00:03:25.290 SO libspdk_scsi.so.9.0 00:03:25.549 LIB libspdk_ublk.a 00:03:25.549 SYMLINK libspdk_scsi.so 00:03:25.549 SO libspdk_ublk.so.3.0 00:03:25.549 SYMLINK libspdk_ublk.so 00:03:25.549 CC lib/iscsi/conn.o 00:03:25.549 CC lib/vhost/vhost.o 00:03:25.549 CC lib/vhost/vhost_rpc.o 00:03:25.549 CC lib/iscsi/init_grp.o 00:03:25.549 CC lib/iscsi/iscsi.o 00:03:25.549 CC lib/vhost/vhost_scsi.o 00:03:25.549 CC lib/vhost/vhost_blk.o 00:03:25.549 CC lib/iscsi/md5.o 00:03:25.549 CC lib/iscsi/param.o 00:03:25.549 CC lib/vhost/rte_vhost_user.o 00:03:25.549 CC lib/iscsi/portal_grp.o 00:03:25.549 CC lib/iscsi/tgt_node.o 00:03:25.549 CC lib/iscsi/iscsi_subsystem.o 00:03:25.549 CC lib/iscsi/task.o 00:03:25.549 CC lib/iscsi/iscsi_rpc.o 00:03:25.808 LIB libspdk_ftl.a 00:03:26.066 SO libspdk_ftl.so.9.0 00:03:26.325 SYMLINK libspdk_ftl.so 00:03:26.892 LIB libspdk_vhost.a 00:03:26.892 SO libspdk_vhost.so.8.0 00:03:26.892 SYMLINK libspdk_vhost.so 00:03:26.892 LIB libspdk_nvmf.a 00:03:26.892 LIB libspdk_iscsi.a 00:03:27.151 SO libspdk_nvmf.so.18.0 00:03:27.151 SO libspdk_iscsi.so.8.0 00:03:27.151 SYMLINK libspdk_iscsi.so 00:03:27.151 SYMLINK libspdk_nvmf.so 00:03:27.410 CC module/vfu_device/vfu_virtio.o 00:03:27.410 CC module/env_dpdk/env_dpdk_rpc.o 00:03:27.410 CC module/vfu_device/vfu_virtio_blk.o 00:03:27.410 CC module/vfu_device/vfu_virtio_scsi.o 00:03:27.410 CC module/vfu_device/vfu_virtio_rpc.o 00:03:27.669 CC module/accel/error/accel_error.o 00:03:27.669 CC module/blob/bdev/blob_bdev.o 00:03:27.669 CC module/keyring/file/keyring.o 00:03:27.669 CC module/accel/error/accel_error_rpc.o 00:03:27.669 CC module/sock/posix/posix.o 00:03:27.669 CC module/keyring/file/keyring_rpc.o 00:03:27.669 CC module/keyring/linux/keyring.o 00:03:27.669 CC module/keyring/linux/keyring_rpc.o 00:03:27.669 CC module/accel/ioat/accel_ioat.o 00:03:27.669 CC module/scheduler/gscheduler/gscheduler.o 00:03:27.669 CC module/accel/dsa/accel_dsa.o 00:03:27.669 CC module/accel/ioat/accel_ioat_rpc.o 00:03:27.669 CC module/accel/iaa/accel_iaa.o 00:03:27.669 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:27.669 CC module/accel/dsa/accel_dsa_rpc.o 00:03:27.669 CC module/accel/iaa/accel_iaa_rpc.o 00:03:27.669 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:27.669 LIB libspdk_env_dpdk_rpc.a 00:03:27.669 SO libspdk_env_dpdk_rpc.so.6.0 00:03:27.669 SYMLINK libspdk_env_dpdk_rpc.so 00:03:27.669 LIB libspdk_keyring_file.a 00:03:27.669 LIB libspdk_keyring_linux.a 00:03:27.669 LIB libspdk_scheduler_gscheduler.a 00:03:27.669 LIB libspdk_scheduler_dpdk_governor.a 00:03:27.928 SO libspdk_keyring_file.so.1.0 00:03:27.928 SO libspdk_keyring_linux.so.1.0 00:03:27.928 SO libspdk_scheduler_gscheduler.so.4.0 00:03:27.928 SO 
libspdk_scheduler_dpdk_governor.so.4.0 00:03:27.928 LIB libspdk_accel_error.a 00:03:27.928 LIB libspdk_scheduler_dynamic.a 00:03:27.928 LIB libspdk_accel_ioat.a 00:03:27.928 LIB libspdk_accel_iaa.a 00:03:27.928 SO libspdk_accel_error.so.2.0 00:03:27.928 SO libspdk_scheduler_dynamic.so.4.0 00:03:27.928 SO libspdk_accel_ioat.so.6.0 00:03:27.928 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:27.928 SYMLINK libspdk_scheduler_gscheduler.so 00:03:27.928 SYMLINK libspdk_keyring_file.so 00:03:27.928 SYMLINK libspdk_keyring_linux.so 00:03:27.928 SO libspdk_accel_iaa.so.3.0 00:03:27.928 LIB libspdk_accel_dsa.a 00:03:27.928 SYMLINK libspdk_accel_error.so 00:03:27.928 SYMLINK libspdk_scheduler_dynamic.so 00:03:27.928 LIB libspdk_blob_bdev.a 00:03:27.928 SYMLINK libspdk_accel_ioat.so 00:03:27.928 SYMLINK libspdk_accel_iaa.so 00:03:27.928 SO libspdk_accel_dsa.so.5.0 00:03:27.928 SO libspdk_blob_bdev.so.11.0 00:03:27.928 SYMLINK libspdk_blob_bdev.so 00:03:27.928 SYMLINK libspdk_accel_dsa.so 00:03:28.187 LIB libspdk_vfu_device.a 00:03:28.187 SO libspdk_vfu_device.so.3.0 00:03:28.187 CC module/blobfs/bdev/blobfs_bdev.o 00:03:28.187 CC module/bdev/nvme/bdev_nvme.o 00:03:28.187 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:28.187 CC module/bdev/malloc/bdev_malloc.o 00:03:28.187 CC module/bdev/nvme/nvme_rpc.o 00:03:28.187 CC module/bdev/delay/vbdev_delay.o 00:03:28.187 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:28.187 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:28.187 CC module/bdev/passthru/vbdev_passthru.o 00:03:28.187 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:28.187 CC module/bdev/nvme/bdev_mdns_client.o 00:03:28.187 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:28.187 CC module/bdev/split/vbdev_split.o 00:03:28.187 CC module/bdev/gpt/gpt.o 00:03:28.187 CC module/bdev/nvme/vbdev_opal.o 00:03:28.187 CC module/bdev/null/bdev_null.o 00:03:28.187 CC module/bdev/gpt/vbdev_gpt.o 00:03:28.187 CC module/bdev/lvol/vbdev_lvol.o 00:03:28.187 CC module/bdev/error/vbdev_error.o 00:03:28.187 CC module/bdev/null/bdev_null_rpc.o 00:03:28.187 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:28.187 CC module/bdev/split/vbdev_split_rpc.o 00:03:28.187 CC module/bdev/error/vbdev_error_rpc.o 00:03:28.187 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:28.187 CC module/bdev/raid/bdev_raid.o 00:03:28.187 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:28.187 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:28.187 CC module/bdev/aio/bdev_aio.o 00:03:28.187 CC module/bdev/raid/bdev_raid_rpc.o 00:03:28.187 CC module/bdev/aio/bdev_aio_rpc.o 00:03:28.187 CC module/bdev/raid/bdev_raid_sb.o 00:03:28.187 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:28.187 CC module/bdev/ftl/bdev_ftl.o 00:03:28.187 CC module/bdev/iscsi/bdev_iscsi.o 00:03:28.187 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:28.187 CC module/bdev/raid/raid0.o 00:03:28.187 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:28.187 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:28.187 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:28.187 CC module/bdev/raid/raid1.o 00:03:28.187 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:28.187 CC module/bdev/raid/concat.o 00:03:28.447 SYMLINK libspdk_vfu_device.so 00:03:28.447 LIB libspdk_sock_posix.a 00:03:28.706 SO libspdk_sock_posix.so.6.0 00:03:28.706 LIB libspdk_bdev_split.a 00:03:28.706 LIB libspdk_blobfs_bdev.a 00:03:28.706 SO libspdk_bdev_split.so.6.0 00:03:28.706 LIB libspdk_bdev_error.a 00:03:28.706 SO libspdk_blobfs_bdev.so.6.0 00:03:28.706 SYMLINK libspdk_sock_posix.so 00:03:28.706 LIB libspdk_bdev_null.a 00:03:28.706 SO 
libspdk_bdev_error.so.6.0 00:03:28.706 SO libspdk_bdev_null.so.6.0 00:03:28.706 LIB libspdk_bdev_gpt.a 00:03:28.706 LIB libspdk_bdev_zone_block.a 00:03:28.706 SYMLINK libspdk_blobfs_bdev.so 00:03:28.706 SYMLINK libspdk_bdev_split.so 00:03:28.706 SYMLINK libspdk_bdev_error.so 00:03:28.706 SO libspdk_bdev_gpt.so.6.0 00:03:28.706 LIB libspdk_bdev_passthru.a 00:03:28.706 SO libspdk_bdev_zone_block.so.6.0 00:03:28.706 SYMLINK libspdk_bdev_null.so 00:03:28.706 SO libspdk_bdev_passthru.so.6.0 00:03:28.706 SYMLINK libspdk_bdev_gpt.so 00:03:28.706 LIB libspdk_bdev_ftl.a 00:03:28.706 SYMLINK libspdk_bdev_zone_block.so 00:03:28.965 LIB libspdk_bdev_malloc.a 00:03:28.965 SO libspdk_bdev_ftl.so.6.0 00:03:28.965 SYMLINK libspdk_bdev_passthru.so 00:03:28.965 LIB libspdk_bdev_iscsi.a 00:03:28.965 SO libspdk_bdev_malloc.so.6.0 00:03:28.965 LIB libspdk_bdev_aio.a 00:03:28.965 SO libspdk_bdev_iscsi.so.6.0 00:03:28.965 SO libspdk_bdev_aio.so.6.0 00:03:28.965 SYMLINK libspdk_bdev_ftl.so 00:03:28.965 LIB libspdk_bdev_delay.a 00:03:28.965 SYMLINK libspdk_bdev_malloc.so 00:03:28.965 SO libspdk_bdev_delay.so.6.0 00:03:28.965 SYMLINK libspdk_bdev_iscsi.so 00:03:28.965 SYMLINK libspdk_bdev_aio.so 00:03:28.965 LIB libspdk_bdev_virtio.a 00:03:28.965 SYMLINK libspdk_bdev_delay.so 00:03:28.965 SO libspdk_bdev_virtio.so.6.0 00:03:28.965 LIB libspdk_bdev_lvol.a 00:03:28.965 SYMLINK libspdk_bdev_virtio.so 00:03:28.965 SO libspdk_bdev_lvol.so.6.0 00:03:29.224 SYMLINK libspdk_bdev_lvol.so 00:03:29.481 LIB libspdk_bdev_raid.a 00:03:29.481 SO libspdk_bdev_raid.so.6.0 00:03:29.737 SYMLINK libspdk_bdev_raid.so 00:03:30.671 LIB libspdk_bdev_nvme.a 00:03:30.671 SO libspdk_bdev_nvme.so.7.0 00:03:30.671 SYMLINK libspdk_bdev_nvme.so 00:03:31.236 CC module/event/subsystems/scheduler/scheduler.o 00:03:31.236 CC module/event/subsystems/iobuf/iobuf.o 00:03:31.236 CC module/event/subsystems/keyring/keyring.o 00:03:31.236 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:31.236 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:31.236 CC module/event/subsystems/sock/sock.o 00:03:31.236 CC module/event/subsystems/vmd/vmd.o 00:03:31.236 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:31.236 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:31.236 LIB libspdk_event_keyring.a 00:03:31.236 LIB libspdk_event_sock.a 00:03:31.236 LIB libspdk_event_vhost_blk.a 00:03:31.236 LIB libspdk_event_scheduler.a 00:03:31.236 LIB libspdk_event_vfu_tgt.a 00:03:31.236 LIB libspdk_event_vmd.a 00:03:31.236 SO libspdk_event_keyring.so.1.0 00:03:31.236 LIB libspdk_event_iobuf.a 00:03:31.236 SO libspdk_event_scheduler.so.4.0 00:03:31.236 SO libspdk_event_sock.so.5.0 00:03:31.236 SO libspdk_event_vhost_blk.so.3.0 00:03:31.236 SO libspdk_event_vfu_tgt.so.3.0 00:03:31.236 SO libspdk_event_vmd.so.6.0 00:03:31.236 SO libspdk_event_iobuf.so.3.0 00:03:31.236 SYMLINK libspdk_event_keyring.so 00:03:31.236 SYMLINK libspdk_event_vhost_blk.so 00:03:31.236 SYMLINK libspdk_event_sock.so 00:03:31.236 SYMLINK libspdk_event_scheduler.so 00:03:31.236 SYMLINK libspdk_event_vfu_tgt.so 00:03:31.236 SYMLINK libspdk_event_vmd.so 00:03:31.493 SYMLINK libspdk_event_iobuf.so 00:03:31.493 CC module/event/subsystems/accel/accel.o 00:03:31.751 LIB libspdk_event_accel.a 00:03:31.751 SO libspdk_event_accel.so.6.0 00:03:31.751 SYMLINK libspdk_event_accel.so 00:03:32.010 CC module/event/subsystems/bdev/bdev.o 00:03:32.268 LIB libspdk_event_bdev.a 00:03:32.268 SO libspdk_event_bdev.so.6.0 00:03:32.268 SYMLINK libspdk_event_bdev.so 00:03:32.268 CC module/event/subsystems/ublk/ublk.o 00:03:32.268 
CC module/event/subsystems/scsi/scsi.o 00:03:32.268 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:32.268 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:32.268 CC module/event/subsystems/nbd/nbd.o 00:03:32.526 LIB libspdk_event_ublk.a 00:03:32.526 LIB libspdk_event_nbd.a 00:03:32.526 LIB libspdk_event_scsi.a 00:03:32.526 SO libspdk_event_nbd.so.6.0 00:03:32.526 SO libspdk_event_ublk.so.3.0 00:03:32.526 SO libspdk_event_scsi.so.6.0 00:03:32.526 SYMLINK libspdk_event_ublk.so 00:03:32.526 SYMLINK libspdk_event_nbd.so 00:03:32.526 SYMLINK libspdk_event_scsi.so 00:03:32.526 LIB libspdk_event_nvmf.a 00:03:32.526 SO libspdk_event_nvmf.so.6.0 00:03:32.784 SYMLINK libspdk_event_nvmf.so 00:03:32.784 CC module/event/subsystems/iscsi/iscsi.o 00:03:32.784 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:33.043 LIB libspdk_event_vhost_scsi.a 00:03:33.043 LIB libspdk_event_iscsi.a 00:03:33.043 SO libspdk_event_vhost_scsi.so.3.0 00:03:33.043 SO libspdk_event_iscsi.so.6.0 00:03:33.043 SYMLINK libspdk_event_vhost_scsi.so 00:03:33.043 SYMLINK libspdk_event_iscsi.so 00:03:33.043 SO libspdk.so.6.0 00:03:33.043 SYMLINK libspdk.so 00:03:33.307 CC app/trace_record/trace_record.o 00:03:33.307 CXX app/trace/trace.o 00:03:33.307 CC app/spdk_top/spdk_top.o 00:03:33.307 CC app/spdk_nvme_discover/discovery_aer.o 00:03:33.307 CC app/spdk_nvme_perf/perf.o 00:03:33.307 CC app/spdk_lspci/spdk_lspci.o 00:03:33.307 CC app/spdk_nvme_identify/identify.o 00:03:33.307 TEST_HEADER include/spdk/accel.h 00:03:33.307 CC test/rpc_client/rpc_client_test.o 00:03:33.307 TEST_HEADER include/spdk/accel_module.h 00:03:33.307 TEST_HEADER include/spdk/assert.h 00:03:33.307 TEST_HEADER include/spdk/barrier.h 00:03:33.307 TEST_HEADER include/spdk/base64.h 00:03:33.307 TEST_HEADER include/spdk/bdev.h 00:03:33.307 TEST_HEADER include/spdk/bdev_module.h 00:03:33.307 TEST_HEADER include/spdk/bdev_zone.h 00:03:33.307 TEST_HEADER include/spdk/bit_array.h 00:03:33.307 TEST_HEADER include/spdk/bit_pool.h 00:03:33.307 TEST_HEADER include/spdk/blob_bdev.h 00:03:33.307 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:33.307 TEST_HEADER include/spdk/blobfs.h 00:03:33.308 TEST_HEADER include/spdk/blob.h 00:03:33.308 TEST_HEADER include/spdk/conf.h 00:03:33.308 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:33.308 CC app/spdk_dd/spdk_dd.o 00:03:33.308 TEST_HEADER include/spdk/config.h 00:03:33.308 TEST_HEADER include/spdk/cpuset.h 00:03:33.308 TEST_HEADER include/spdk/crc16.h 00:03:33.308 CC app/iscsi_tgt/iscsi_tgt.o 00:03:33.308 TEST_HEADER include/spdk/crc32.h 00:03:33.308 CC app/nvmf_tgt/nvmf_main.o 00:03:33.308 TEST_HEADER include/spdk/crc64.h 00:03:33.565 CC app/vhost/vhost.o 00:03:33.565 TEST_HEADER include/spdk/dif.h 00:03:33.565 TEST_HEADER include/spdk/dma.h 00:03:33.565 TEST_HEADER include/spdk/endian.h 00:03:33.565 TEST_HEADER include/spdk/env_dpdk.h 00:03:33.565 TEST_HEADER include/spdk/env.h 00:03:33.565 TEST_HEADER include/spdk/event.h 00:03:33.565 TEST_HEADER include/spdk/fd_group.h 00:03:33.565 TEST_HEADER include/spdk/fd.h 00:03:33.565 TEST_HEADER include/spdk/file.h 00:03:33.565 CC app/spdk_tgt/spdk_tgt.o 00:03:33.565 TEST_HEADER include/spdk/ftl.h 00:03:33.565 TEST_HEADER include/spdk/gpt_spec.h 00:03:33.565 TEST_HEADER include/spdk/hexlify.h 00:03:33.565 CC app/fio/nvme/fio_plugin.o 00:03:33.565 TEST_HEADER include/spdk/histogram_data.h 00:03:33.565 CC examples/accel/perf/accel_perf.o 00:03:33.565 CC examples/vmd/led/led.o 00:03:33.565 TEST_HEADER include/spdk/idxd.h 00:03:33.565 CC examples/nvme/hello_world/hello_world.o 
00:03:33.565 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:33.565 CC examples/nvme/reconnect/reconnect.o 00:03:33.565 CC examples/ioat/perf/perf.o 00:03:33.565 CC examples/util/zipf/zipf.o 00:03:33.566 CC examples/idxd/perf/perf.o 00:03:33.566 CC examples/sock/hello_world/hello_sock.o 00:03:33.566 TEST_HEADER include/spdk/idxd_spec.h 00:03:33.566 CC examples/nvme/arbitration/arbitration.o 00:03:33.566 TEST_HEADER include/spdk/init.h 00:03:33.566 TEST_HEADER include/spdk/ioat.h 00:03:33.566 CC examples/vmd/lsvmd/lsvmd.o 00:03:33.566 CC examples/nvme/hotplug/hotplug.o 00:03:33.566 CC test/thread/poller_perf/poller_perf.o 00:03:33.566 CC examples/nvme/abort/abort.o 00:03:33.566 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:33.566 TEST_HEADER include/spdk/ioat_spec.h 00:03:33.566 CC examples/ioat/verify/verify.o 00:03:33.566 CC test/event/event_perf/event_perf.o 00:03:33.566 TEST_HEADER include/spdk/iscsi_spec.h 00:03:33.566 TEST_HEADER include/spdk/json.h 00:03:33.566 TEST_HEADER include/spdk/jsonrpc.h 00:03:33.566 CC test/nvme/aer/aer.o 00:03:33.566 TEST_HEADER include/spdk/keyring.h 00:03:33.566 TEST_HEADER include/spdk/keyring_module.h 00:03:33.566 TEST_HEADER include/spdk/likely.h 00:03:33.566 TEST_HEADER include/spdk/log.h 00:03:33.566 CC examples/blob/cli/blobcli.o 00:03:33.566 TEST_HEADER include/spdk/lvol.h 00:03:33.566 TEST_HEADER include/spdk/memory.h 00:03:33.566 CC examples/bdev/hello_world/hello_bdev.o 00:03:33.566 TEST_HEADER include/spdk/mmio.h 00:03:33.566 CC test/bdev/bdevio/bdevio.o 00:03:33.566 CC examples/nvmf/nvmf/nvmf.o 00:03:33.566 TEST_HEADER include/spdk/nbd.h 00:03:33.566 CC examples/bdev/bdevperf/bdevperf.o 00:03:33.566 CC app/fio/bdev/fio_plugin.o 00:03:33.566 TEST_HEADER include/spdk/notify.h 00:03:33.566 CC examples/blob/hello_world/hello_blob.o 00:03:33.566 CC test/accel/dif/dif.o 00:03:33.566 TEST_HEADER include/spdk/nvme.h 00:03:33.566 TEST_HEADER include/spdk/nvme_intel.h 00:03:33.566 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:33.566 CC test/blobfs/mkfs/mkfs.o 00:03:33.566 CC examples/thread/thread/thread_ex.o 00:03:33.566 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:33.566 TEST_HEADER include/spdk/nvme_spec.h 00:03:33.566 CC test/dma/test_dma/test_dma.o 00:03:33.566 TEST_HEADER include/spdk/nvme_zns.h 00:03:33.566 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:33.566 CC test/app/bdev_svc/bdev_svc.o 00:03:33.566 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:33.566 TEST_HEADER include/spdk/nvmf.h 00:03:33.566 TEST_HEADER include/spdk/nvmf_spec.h 00:03:33.566 TEST_HEADER include/spdk/nvmf_transport.h 00:03:33.566 TEST_HEADER include/spdk/opal.h 00:03:33.566 TEST_HEADER include/spdk/opal_spec.h 00:03:33.566 TEST_HEADER include/spdk/pci_ids.h 00:03:33.566 TEST_HEADER include/spdk/pipe.h 00:03:33.566 TEST_HEADER include/spdk/queue.h 00:03:33.566 TEST_HEADER include/spdk/reduce.h 00:03:33.566 TEST_HEADER include/spdk/rpc.h 00:03:33.566 TEST_HEADER include/spdk/scheduler.h 00:03:33.566 TEST_HEADER include/spdk/scsi.h 00:03:33.566 TEST_HEADER include/spdk/scsi_spec.h 00:03:33.566 TEST_HEADER include/spdk/sock.h 00:03:33.566 TEST_HEADER include/spdk/stdinc.h 00:03:33.566 LINK spdk_lspci 00:03:33.566 TEST_HEADER include/spdk/string.h 00:03:33.566 TEST_HEADER include/spdk/thread.h 00:03:33.566 TEST_HEADER include/spdk/trace.h 00:03:33.566 TEST_HEADER include/spdk/trace_parser.h 00:03:33.566 TEST_HEADER include/spdk/tree.h 00:03:33.566 CC test/lvol/esnap/esnap.o 00:03:33.566 TEST_HEADER include/spdk/ublk.h 00:03:33.566 TEST_HEADER include/spdk/util.h 00:03:33.566 
TEST_HEADER include/spdk/uuid.h 00:03:33.566 TEST_HEADER include/spdk/version.h 00:03:33.566 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:33.829 CC test/env/mem_callbacks/mem_callbacks.o 00:03:33.829 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:33.829 TEST_HEADER include/spdk/vhost.h 00:03:33.829 TEST_HEADER include/spdk/vmd.h 00:03:33.829 TEST_HEADER include/spdk/xor.h 00:03:33.829 TEST_HEADER include/spdk/zipf.h 00:03:33.829 CXX test/cpp_headers/accel.o 00:03:33.829 LINK rpc_client_test 00:03:33.829 LINK spdk_nvme_discover 00:03:33.829 LINK interrupt_tgt 00:03:33.829 LINK lsvmd 00:03:33.829 LINK poller_perf 00:03:33.829 LINK led 00:03:33.829 LINK nvmf_tgt 00:03:33.829 LINK zipf 00:03:33.829 LINK vhost 00:03:33.829 LINK event_perf 00:03:33.829 LINK spdk_trace_record 00:03:33.829 LINK iscsi_tgt 00:03:33.829 LINK cmb_copy 00:03:33.829 LINK spdk_tgt 00:03:34.094 LINK hello_world 00:03:34.094 LINK verify 00:03:34.094 LINK ioat_perf 00:03:34.094 LINK bdev_svc 00:03:34.094 CXX test/cpp_headers/accel_module.o 00:03:34.094 LINK mkfs 00:03:34.094 LINK hotplug 00:03:34.094 LINK hello_sock 00:03:34.094 LINK hello_blob 00:03:34.094 LINK hello_bdev 00:03:34.094 LINK mem_callbacks 00:03:34.094 CXX test/cpp_headers/assert.o 00:03:34.094 LINK aer 00:03:34.094 LINK spdk_dd 00:03:34.094 LINK thread 00:03:34.094 CXX test/cpp_headers/barrier.o 00:03:34.094 LINK idxd_perf 00:03:34.094 CXX test/cpp_headers/base64.o 00:03:34.094 LINK reconnect 00:03:34.094 LINK arbitration 00:03:34.094 CXX test/cpp_headers/bdev.o 00:03:34.094 LINK nvmf 00:03:34.094 LINK spdk_trace 00:03:34.357 LINK abort 00:03:34.357 CC test/event/reactor/reactor.o 00:03:34.357 CC test/event/reactor_perf/reactor_perf.o 00:03:34.357 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:34.357 CXX test/cpp_headers/bdev_module.o 00:03:34.357 CXX test/cpp_headers/bdev_zone.o 00:03:34.357 LINK bdevio 00:03:34.357 CC test/env/vtophys/vtophys.o 00:03:34.357 CC test/event/app_repeat/app_repeat.o 00:03:34.357 LINK test_dma 00:03:34.357 CXX test/cpp_headers/bit_array.o 00:03:34.357 LINK accel_perf 00:03:34.357 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:34.357 CC test/env/memory/memory_ut.o 00:03:34.357 CC test/event/scheduler/scheduler.o 00:03:34.357 LINK dif 00:03:34.357 CC test/env/pci/pci_ut.o 00:03:34.357 LINK nvme_manage 00:03:34.357 CXX test/cpp_headers/bit_pool.o 00:03:34.357 CC test/app/histogram_perf/histogram_perf.o 00:03:34.357 CXX test/cpp_headers/blob_bdev.o 00:03:34.357 CXX test/cpp_headers/blobfs_bdev.o 00:03:34.357 CXX test/cpp_headers/blobfs.o 00:03:34.621 CC test/nvme/reset/reset.o 00:03:34.621 CC test/app/jsoncat/jsoncat.o 00:03:34.621 CXX test/cpp_headers/blob.o 00:03:34.621 CC test/app/stub/stub.o 00:03:34.621 CC test/nvme/sgl/sgl.o 00:03:34.621 LINK reactor 00:03:34.621 LINK spdk_bdev 00:03:34.621 LINK blobcli 00:03:34.621 CXX test/cpp_headers/conf.o 00:03:34.621 LINK reactor_perf 00:03:34.621 CXX test/cpp_headers/config.o 00:03:34.621 LINK spdk_nvme 00:03:34.621 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:34.621 CC test/nvme/e2edp/nvme_dp.o 00:03:34.621 CXX test/cpp_headers/cpuset.o 00:03:34.621 CXX test/cpp_headers/crc16.o 00:03:34.621 CC test/nvme/overhead/overhead.o 00:03:34.621 LINK vtophys 00:03:34.621 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:34.621 CC test/nvme/startup/startup.o 00:03:34.621 CC test/nvme/err_injection/err_injection.o 00:03:34.621 LINK pmr_persistence 00:03:34.621 LINK app_repeat 00:03:34.621 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:34.621 CXX test/cpp_headers/crc32.o 
00:03:34.621 CXX test/cpp_headers/crc64.o 00:03:34.621 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:34.621 CXX test/cpp_headers/dif.o 00:03:34.881 CC test/nvme/reserve/reserve.o 00:03:34.881 LINK env_dpdk_post_init 00:03:34.881 CC test/nvme/connect_stress/connect_stress.o 00:03:34.881 LINK histogram_perf 00:03:34.881 CC test/nvme/simple_copy/simple_copy.o 00:03:34.881 CXX test/cpp_headers/dma.o 00:03:34.881 LINK jsoncat 00:03:34.881 CXX test/cpp_headers/endian.o 00:03:34.881 CXX test/cpp_headers/env_dpdk.o 00:03:34.881 CXX test/cpp_headers/env.o 00:03:34.881 CXX test/cpp_headers/event.o 00:03:34.881 LINK scheduler 00:03:34.881 CC test/nvme/boot_partition/boot_partition.o 00:03:34.881 LINK stub 00:03:34.881 CXX test/cpp_headers/fd_group.o 00:03:34.881 CC test/nvme/compliance/nvme_compliance.o 00:03:34.881 CXX test/cpp_headers/fd.o 00:03:34.881 CXX test/cpp_headers/file.o 00:03:34.881 CC test/nvme/fused_ordering/fused_ordering.o 00:03:34.881 CC test/nvme/fdp/fdp.o 00:03:34.881 CXX test/cpp_headers/ftl.o 00:03:34.881 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:34.881 CXX test/cpp_headers/gpt_spec.o 00:03:34.881 CXX test/cpp_headers/hexlify.o 00:03:34.882 CXX test/cpp_headers/histogram_data.o 00:03:34.882 LINK spdk_nvme_identify 00:03:34.882 LINK spdk_nvme_perf 00:03:34.882 CXX test/cpp_headers/idxd.o 00:03:34.882 CC test/nvme/cuse/cuse.o 00:03:34.882 CXX test/cpp_headers/idxd_spec.o 00:03:34.882 CXX test/cpp_headers/init.o 00:03:35.143 LINK startup 00:03:35.143 LINK reset 00:03:35.143 LINK bdevperf 00:03:35.143 LINK sgl 00:03:35.143 LINK err_injection 00:03:35.143 CXX test/cpp_headers/ioat.o 00:03:35.143 CXX test/cpp_headers/ioat_spec.o 00:03:35.143 CXX test/cpp_headers/iscsi_spec.o 00:03:35.143 LINK spdk_top 00:03:35.143 CXX test/cpp_headers/json.o 00:03:35.143 CXX test/cpp_headers/jsonrpc.o 00:03:35.143 LINK nvme_dp 00:03:35.143 CXX test/cpp_headers/keyring.o 00:03:35.143 LINK connect_stress 00:03:35.143 LINK reserve 00:03:35.143 LINK overhead 00:03:35.143 LINK pci_ut 00:03:35.143 CXX test/cpp_headers/keyring_module.o 00:03:35.143 CXX test/cpp_headers/likely.o 00:03:35.143 CXX test/cpp_headers/log.o 00:03:35.143 CXX test/cpp_headers/lvol.o 00:03:35.143 CXX test/cpp_headers/memory.o 00:03:35.405 CXX test/cpp_headers/mmio.o 00:03:35.405 CXX test/cpp_headers/nbd.o 00:03:35.405 LINK boot_partition 00:03:35.405 CXX test/cpp_headers/notify.o 00:03:35.405 CXX test/cpp_headers/nvme.o 00:03:35.405 CXX test/cpp_headers/nvme_intel.o 00:03:35.405 CXX test/cpp_headers/nvme_ocssd.o 00:03:35.405 LINK simple_copy 00:03:35.405 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:35.405 CXX test/cpp_headers/nvme_spec.o 00:03:35.405 CXX test/cpp_headers/nvme_zns.o 00:03:35.405 CXX test/cpp_headers/nvmf_cmd.o 00:03:35.405 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:35.405 CXX test/cpp_headers/nvmf.o 00:03:35.405 LINK fused_ordering 00:03:35.405 CXX test/cpp_headers/nvmf_spec.o 00:03:35.405 CXX test/cpp_headers/nvmf_transport.o 00:03:35.405 LINK doorbell_aers 00:03:35.405 LINK nvme_fuzz 00:03:35.405 CXX test/cpp_headers/opal.o 00:03:35.405 CXX test/cpp_headers/opal_spec.o 00:03:35.405 CXX test/cpp_headers/pci_ids.o 00:03:35.405 CXX test/cpp_headers/pipe.o 00:03:35.405 CXX test/cpp_headers/queue.o 00:03:35.405 CXX test/cpp_headers/reduce.o 00:03:35.405 CXX test/cpp_headers/rpc.o 00:03:35.405 CXX test/cpp_headers/scheduler.o 00:03:35.405 CXX test/cpp_headers/scsi.o 00:03:35.405 CXX test/cpp_headers/scsi_spec.o 00:03:35.405 CXX test/cpp_headers/sock.o 00:03:35.405 CXX test/cpp_headers/stdinc.o 00:03:35.405 CXX 
test/cpp_headers/string.o 00:03:35.405 CXX test/cpp_headers/thread.o 00:03:35.405 CXX test/cpp_headers/trace.o 00:03:35.405 CXX test/cpp_headers/trace_parser.o 00:03:35.405 CXX test/cpp_headers/tree.o 00:03:35.405 CXX test/cpp_headers/ublk.o 00:03:35.664 LINK vhost_fuzz 00:03:35.664 LINK nvme_compliance 00:03:35.664 CXX test/cpp_headers/util.o 00:03:35.664 LINK fdp 00:03:35.664 CXX test/cpp_headers/uuid.o 00:03:35.664 CXX test/cpp_headers/version.o 00:03:35.664 CXX test/cpp_headers/vfio_user_pci.o 00:03:35.664 CXX test/cpp_headers/vfio_user_spec.o 00:03:35.664 CXX test/cpp_headers/vhost.o 00:03:35.664 CXX test/cpp_headers/vmd.o 00:03:35.664 CXX test/cpp_headers/xor.o 00:03:35.664 CXX test/cpp_headers/zipf.o 00:03:35.922 LINK memory_ut 00:03:36.857 LINK cuse 00:03:36.857 LINK iscsi_fuzz 00:03:39.396 LINK esnap 00:03:39.961 00:03:39.961 real 0m40.738s 00:03:39.961 user 7m33.292s 00:03:39.961 sys 1m51.229s 00:03:39.961 04:19:59 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:39.961 04:19:59 make -- common/autotest_common.sh@10 -- $ set +x 00:03:39.961 ************************************ 00:03:39.961 END TEST make 00:03:39.961 ************************************ 00:03:39.961 04:19:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:39.961 04:19:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:39.961 04:19:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:39.961 04:19:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.961 04:19:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:39.961 04:19:59 -- pm/common@44 -- $ pid=2551571 00:03:39.961 04:19:59 -- pm/common@50 -- $ kill -TERM 2551571 00:03:39.961 04:19:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.961 04:19:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:39.961 04:19:59 -- pm/common@44 -- $ pid=2551573 00:03:39.961 04:19:59 -- pm/common@50 -- $ kill -TERM 2551573 00:03:39.961 04:19:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.961 04:19:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:39.961 04:19:59 -- pm/common@44 -- $ pid=2551575 00:03:39.961 04:19:59 -- pm/common@50 -- $ kill -TERM 2551575 00:03:39.961 04:19:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.961 04:19:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:39.961 04:19:59 -- pm/common@44 -- $ pid=2551604 00:03:39.961 04:19:59 -- pm/common@50 -- $ sudo -E kill -TERM 2551604 00:03:39.961 04:20:00 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:39.961 04:20:00 -- nvmf/common.sh@7 -- # uname -s 00:03:39.961 04:20:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:39.961 04:20:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:39.961 04:20:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:39.961 04:20:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:39.961 04:20:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:39.961 04:20:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:39.961 04:20:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:39.961 04:20:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:39.961 04:20:00 -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:39.961 04:20:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:39.961 04:20:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:39.961 04:20:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:39.961 04:20:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:39.961 04:20:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:39.961 04:20:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:39.961 04:20:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:39.961 04:20:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:39.961 04:20:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:39.961 04:20:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:39.961 04:20:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:39.961 04:20:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.961 04:20:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.961 04:20:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.961 04:20:00 -- paths/export.sh@5 -- # export PATH 00:03:39.961 04:20:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.961 04:20:00 -- nvmf/common.sh@47 -- # : 0 00:03:39.961 04:20:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:39.961 04:20:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:39.961 04:20:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:39.961 04:20:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:39.961 04:20:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:39.961 04:20:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:39.961 04:20:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:39.961 04:20:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:39.961 04:20:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:39.961 04:20:00 -- spdk/autotest.sh@32 -- # uname -s 00:03:39.961 04:20:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:39.961 04:20:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:39.961 04:20:00 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:39.961 04:20:00 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:39.962 04:20:00 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:39.962 04:20:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:39.962 04:20:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:39.962 04:20:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:39.962 04:20:00 -- spdk/autotest.sh@48 -- # udevadm_pid=2627706 00:03:39.962 04:20:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:39.962 04:20:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:39.962 04:20:00 -- pm/common@17 -- # local monitor 00:03:39.962 04:20:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.962 04:20:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.962 04:20:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.962 04:20:00 -- pm/common@21 -- # date +%s 00:03:39.962 04:20:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.962 04:20:00 -- pm/common@21 -- # date +%s 00:03:39.962 04:20:00 -- pm/common@25 -- # sleep 1 00:03:39.962 04:20:00 -- pm/common@21 -- # date +%s 00:03:39.962 04:20:00 -- pm/common@21 -- # date +%s 00:03:39.962 04:20:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720923600 00:03:39.962 04:20:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720923600 00:03:39.962 04:20:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720923600 00:03:39.962 04:20:00 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720923600 00:03:39.962 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720923600_collect-vmstat.pm.log 00:03:39.962 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720923600_collect-cpu-load.pm.log 00:03:39.962 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720923600_collect-cpu-temp.pm.log 00:03:39.962 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720923600_collect-bmc-pm.bmc.pm.log 00:03:40.893 04:20:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:40.893 04:20:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:40.893 04:20:01 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:40.893 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:03:40.893 04:20:01 -- spdk/autotest.sh@59 -- # create_test_list 00:03:40.893 04:20:01 -- common/autotest_common.sh@744 -- # xtrace_disable 00:03:40.893 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:03:41.151 04:20:01 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:41.151 04:20:01 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:41.151 04:20:01 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:41.151 04:20:01 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:41.151 04:20:01 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:41.151 04:20:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:41.151 04:20:01 -- common/autotest_common.sh@1451 -- # uname 00:03:41.151 04:20:01 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:03:41.151 04:20:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:41.151 04:20:01 -- common/autotest_common.sh@1471 -- # uname 00:03:41.151 04:20:01 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:03:41.151 04:20:01 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:41.151 04:20:01 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:41.151 04:20:01 -- spdk/autotest.sh@72 -- # hash lcov 00:03:41.151 04:20:01 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:41.151 04:20:01 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:41.151 --rc lcov_branch_coverage=1 00:03:41.151 --rc lcov_function_coverage=1 00:03:41.151 --rc genhtml_branch_coverage=1 00:03:41.151 --rc genhtml_function_coverage=1 00:03:41.151 --rc genhtml_legend=1 00:03:41.151 --rc geninfo_all_blocks=1 00:03:41.151 ' 00:03:41.151 04:20:01 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:41.151 --rc lcov_branch_coverage=1 00:03:41.151 --rc lcov_function_coverage=1 00:03:41.151 --rc genhtml_branch_coverage=1 00:03:41.151 --rc genhtml_function_coverage=1 00:03:41.151 --rc genhtml_legend=1 00:03:41.151 --rc geninfo_all_blocks=1 00:03:41.151 ' 00:03:41.151 04:20:01 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:41.151 --rc lcov_branch_coverage=1 00:03:41.151 --rc lcov_function_coverage=1 00:03:41.151 --rc genhtml_branch_coverage=1 00:03:41.151 --rc genhtml_function_coverage=1 00:03:41.151 --rc genhtml_legend=1 00:03:41.151 --rc geninfo_all_blocks=1 00:03:41.151 --no-external' 00:03:41.151 04:20:01 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:41.151 --rc lcov_branch_coverage=1 00:03:41.151 --rc lcov_function_coverage=1 00:03:41.151 --rc genhtml_branch_coverage=1 00:03:41.151 --rc genhtml_function_coverage=1 00:03:41.152 --rc genhtml_legend=1 00:03:41.152 --rc geninfo_all_blocks=1 00:03:41.152 --no-external' 00:03:41.152 04:20:01 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:41.152 lcov: LCOV version 1.14 00:03:41.152 04:20:01 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:56.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:56.027 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:10.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:10.905 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:10.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:10.905 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:10.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:10.905 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:10.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:10.905 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:10.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:10.905 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:10.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:10.905 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:10.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:10.905 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:10.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:10.905 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:10.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:10.905 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 
00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:10.906 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:10.906 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:10.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:10.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:10.907 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:10.907 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:10.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:10.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:14.200 04:20:33 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:14.200 04:20:33 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:14.200 04:20:33 -- common/autotest_common.sh@10 -- # set +x 00:04:14.200 04:20:33 -- spdk/autotest.sh@91 -- # rm -f 00:04:14.200 04:20:33 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:15.154 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:15.154 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:15.154 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:15.154 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:15.154 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:15.154 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:15.154 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:15.154 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:15.154 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:15.154 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:15.154 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:15.154 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:15.154 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:15.154 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:15.154 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:15.154 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:15.154 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:15.413 04:20:35 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:15.413 04:20:35 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:15.413 04:20:35 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:15.413 04:20:35 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:15.413 04:20:35 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:15.413 04:20:35 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:15.413 04:20:35 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:15.413 04:20:35 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:15.413 04:20:35 
-- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:15.413 04:20:35 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:15.413 04:20:35 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.413 04:20:35 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:15.413 04:20:35 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:15.413 04:20:35 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:15.413 04:20:35 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:15.413 No valid GPT data, bailing 00:04:15.413 04:20:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:15.413 04:20:35 -- scripts/common.sh@391 -- # pt= 00:04:15.413 04:20:35 -- scripts/common.sh@392 -- # return 1 00:04:15.413 04:20:35 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:15.413 1+0 records in 00:04:15.413 1+0 records out 00:04:15.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0025788 s, 407 MB/s 00:04:15.413 04:20:35 -- spdk/autotest.sh@118 -- # sync 00:04:15.413 04:20:35 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:15.413 04:20:35 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:15.413 04:20:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:17.313 04:20:37 -- spdk/autotest.sh@124 -- # uname -s 00:04:17.313 04:20:37 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:17.313 04:20:37 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:17.313 04:20:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:17.313 04:20:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:17.313 04:20:37 -- common/autotest_common.sh@10 -- # set +x 00:04:17.313 ************************************ 00:04:17.313 START TEST setup.sh 00:04:17.313 ************************************ 00:04:17.313 04:20:37 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:17.313 * Looking for test storage... 00:04:17.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:17.313 04:20:37 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:17.313 04:20:37 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:17.313 04:20:37 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:17.313 04:20:37 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:17.313 04:20:37 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:17.313 04:20:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:17.313 ************************************ 00:04:17.313 START TEST acl 00:04:17.313 ************************************ 00:04:17.313 04:20:37 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:17.313 * Looking for test storage... 
00:04:17.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:17.313 04:20:37 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:17.313 04:20:37 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:17.313 04:20:37 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:17.313 04:20:37 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:17.313 04:20:37 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:17.313 04:20:37 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:17.313 04:20:37 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:17.313 04:20:37 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:17.313 04:20:37 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:17.313 04:20:37 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:17.313 04:20:37 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:17.313 04:20:37 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:17.313 04:20:37 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:17.313 04:20:37 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:17.313 04:20:37 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:17.313 04:20:37 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:18.684 04:20:38 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:18.684 04:20:38 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:18.684 04:20:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.684 04:20:38 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:18.684 04:20:38 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.684 04:20:38 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:19.628 Hugepages 00:04:19.628 node hugesize free / total 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.628 00:04:19.628 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:19.628 04:20:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.885 04:20:39 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:19.885 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:19.886 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:19.886 04:20:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.886 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:19.886 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:19.886 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:19.886 04:20:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.886 04:20:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:19.886 04:20:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:19.886 04:20:39 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:19.886 04:20:39 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:19.886 04:20:39 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:19.886 04:20:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.886 04:20:39 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:19.886 04:20:39 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:19.886 04:20:39 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:19.886 04:20:39 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:19.886 04:20:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:19.886 ************************************ 00:04:19.886 START TEST denied 00:04:19.886 ************************************ 00:04:19.886 04:20:39 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:19.886 04:20:39 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:19.886 04:20:39 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:19.886 04:20:39 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:19.886 04:20:39 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.886 04:20:39 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:21.260 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:21.260 04:20:41 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:21.260 04:20:41 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:21.260 04:20:41 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:21.260 04:20:41 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:21.260 04:20:41 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:21.260 04:20:41 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:21.260 04:20:41 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:21.260 04:20:41 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:21.260 04:20:41 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.260 04:20:41 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.789 00:04:23.789 real 0m3.773s 00:04:23.789 user 0m1.039s 00:04:23.789 sys 0m1.813s 00:04:23.789 04:20:43 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:23.789 04:20:43 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:23.789 ************************************ 00:04:23.789 END TEST denied 00:04:23.789 ************************************ 00:04:23.789 04:20:43 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:23.789 04:20:43 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:23.789 04:20:43 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.789 04:20:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:23.789 ************************************ 00:04:23.789 START TEST allowed 00:04:23.789 ************************************ 00:04:23.789 04:20:43 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:04:23.789 04:20:43 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:23.789 04:20:43 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:23.789 04:20:43 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.789 04:20:43 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:23.789 04:20:43 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:26.318 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:26.318 04:20:46 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:26.318 04:20:46 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:26.318 04:20:46 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:26.318 04:20:46 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:26.318 04:20:46 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:27.694 00:04:27.694 real 0m3.765s 00:04:27.694 user 0m0.946s 00:04:27.694 sys 0m1.638s 00:04:27.694 04:20:47 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:27.694 04:20:47 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:27.694 ************************************ 00:04:27.694 END TEST allowed 00:04:27.694 ************************************ 00:04:27.694 00:04:27.694 real 0m10.257s 00:04:27.694 user 0m3.047s 00:04:27.694 sys 0m5.162s 00:04:27.694 04:20:47 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:27.694 04:20:47 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:27.694 ************************************ 00:04:27.694 END TEST acl 00:04:27.694 ************************************ 00:04:27.694 04:20:47 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:27.694 04:20:47 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:27.694 04:20:47 setup.sh -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:04:27.694 04:20:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:27.694 ************************************ 00:04:27.694 START TEST hugepages 00:04:27.694 ************************************ 00:04:27.694 04:20:47 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:27.694 * Looking for test storage... 00:04:27.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41687584 kB' 'MemAvailable: 45197088 kB' 'Buffers: 2704 kB' 'Cached: 12256884 kB' 'SwapCached: 0 kB' 'Active: 9268996 kB' 'Inactive: 3506552 kB' 'Active(anon): 8874644 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519068 kB' 'Mapped: 204228 kB' 'Shmem: 8358684 kB' 'KReclaimable: 204444 kB' 'Slab: 580880 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376436 kB' 'KernelStack: 12880 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 10004092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.694 04:20:47 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.694 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.694 04:20:47 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:27.695 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:27.696 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:27.696 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:27.696 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:27.696 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:27.696 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:27.696 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:27.696 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:27.696 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:27.696 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:27.696 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:27.696 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:27.696 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:27.696 04:20:47 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:27.696 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:27.696 04:20:47 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:27.696 04:20:47 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:27.696 04:20:47 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:27.696 04:20:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:27.696 ************************************ 00:04:27.696 START TEST default_setup 00:04:27.696 ************************************ 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.696 04:20:47 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:29.070 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:29.070 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:29.070 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:29.070 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:29.070 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:29.070 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:29.070 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
00:04:29.070 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:29.070 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:29.070 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:29.070 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:29.070 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:29.070 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:29.070 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:29.070 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:29.070 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:30.010 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43810216 kB' 'MemAvailable: 47319720 kB' 'Buffers: 2704 kB' 'Cached: 12256984 kB' 'SwapCached: 0 kB' 'Active: 9287508 kB' 'Inactive: 3506552 kB' 'Active(anon): 8893156 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537704 kB' 'Mapped: 204280 kB' 'Shmem: 8358784 kB' 'KReclaimable: 204444 kB' 'Slab: 580936 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376492 kB' 'KernelStack: 12848 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10024700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.010 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.010 
04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31-32: the AnonHugePages lookup keeps scanning the remaining /proc/meminfo keys in order (Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted); each non-matching key takes the 'continue' branch and the IFS=': ' read moves on to the next line
00:04:30.011 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32-33: [[ AnonHugePages == AnonHugePages ]] matches, so get_meminfo echoes the value 0 and returns 0
00:04:30.011 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:04:30.011 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:30.011 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@17-19: local get=HugePages_Surp, node= (empty), var, val
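The loop traced above is easier to follow as a compact script. The sketch below is reconstructed from the xtrace output only; it is not the verbatim SPDK setup/common.sh, and the name get_meminfo_sketch is made up for illustration.

  shopt -s extglob                     # needed for the +([0-9]) prefix pattern below

  get_meminfo_sketch() {
      # Look up one key, e.g. "HugePages_Total", optionally for a single NUMA node.
      local get="$1" node="${2:-}" var val _
      local mem_f=/proc/meminfo
      # When a node id is given and a per-node meminfo exists, read that file instead.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      # Per-node meminfo prefixes every line with "Node N "; strip that prefix so
      # both files parse the same way.
      mem=("${mem[@]#Node +([0-9]) }")
      local line
      for line in "${mem[@]}"; do
          # Split "Key:   value kB" into key and value, discarding the unit.
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
      return 1
  }

On the box captured in this log, get_meminfo_sketch HugePages_Total would print 1024 and get_meminfo_sketch AnonHugePages would print 0, matching what the trace returns.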
00:04:30.011 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@20-31: no node argument was given, so the /sys/devices/system/node/node/meminfo test fails and mem_f stays /proc/meminfo; the file is read with mapfile -t mem, any 'Node N ' prefixes are stripped, and the IFS=': ' read loop starts over the snapshot below
00:04:30.011 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@16: /proc/meminfo snapshot taken for the HugePages_Surp lookup:
  MemTotal:        60541708 kB    MemFree:         43811952 kB    MemAvailable:    47321456 kB
  Buffers:             2704 kB    Cached:          12256984 kB    SwapCached:             0 kB
  Active:           9287508 kB    Inactive:         3506552 kB    Active(anon):     8893156 kB
  Inactive(anon):         0 kB    Active(file):      394352 kB    Inactive(file):   3506552 kB
  Unevictable:         3072 kB    Mlocked:                0 kB    SwapTotal:        8388604 kB
  SwapFree:         8388604 kB    Zswap:                  0 kB    Zswapped:               0 kB
  Dirty:                  0 kB    Writeback:              0 kB    AnonPages:         537680 kB
  Mapped:            204256 kB    Shmem:            8358784 kB    KReclaimable:      204444 kB
  Slab:              580932 kB    SReclaimable:      204444 kB    SUnreclaim:        376488 kB
  KernelStack:        12864 kB    PageTables:          8456 kB    SecPageTables:          0 kB
  NFS_Unstable:           0 kB    Bounce:                 0 kB    WritebackTmp:           0 kB
  CommitLimit:     37610880 kB    Committed_AS:    10024716 kB    VmallocTotal: 34359738367 kB
  VmallocUsed:       196548 kB    VmallocChunk:           0 kB    Percpu:             37632 kB
  HardwareCorrupted:      0 kB    AnonHugePages:          0 kB    ShmemHugePages:         0 kB
  ShmemPmdMapped:         0 kB    FileHugePages:          0 kB    FilePmdMapped:          0 kB
  CmaTotal:               0 kB    CmaFree:                0 kB    Unaccepted:             0 kB
  HugePages_Total:     1024       HugePages_Free:      1024       HugePages_Rsvd:         0
  HugePages_Surp:         0       Hugepagesize:        2048 kB    Hugetlb:          2097152 kB
  DirectMap4k:      1922652 kB    DirectMap2M:     15822848 kB    DirectMap1G:     51380224 kB
00:04:30.012 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31-32: the scan skips every key from MemTotal through HugePages_Rsvd, matches HugePages_Surp, echoes 0 and returns 0
00:04:30.013 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:30.013 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:30.013 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@17-31: same setup as above (get=HugePages_Rsvd, node empty, mem_f=/proc/meminfo, mapfile + prefix strip + IFS=': ' read loop)
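A quick cross-check worth doing when reading a snapshot like the one above: the hugetlb pool size should equal HugePages_Total times Hugepagesize. The SPDK scripts do not run this particular check here; it is only a reader's verification, with the two operands hard-coded from the dump.

  # 1024 pages * 2048 kB/page = 2097152 kB, matching the 'Hugetlb: 2097152 kB' line.
  echo "$(( 1024 * 2048 )) kB"
  # The same figure computed from a live /proc/meminfo:
  awk '/^HugePages_Total:/ {n = $2} /^Hugepagesize:/ {sz = $2} END {print n * sz " kB"}' /proc/meminfo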
00:04:30.013 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@16: /proc/meminfo snapshot taken for the HugePages_Rsvd lookup; identical to the previous snapshot except for the counters that moved between the two reads: MemFree 43812608 kB, MemAvailable 47322112 kB, Cached 12257004 kB, Active 9286784 kB, Active(anon) 8892432 kB, AnonPages 536908 kB, Shmem 8358804 kB, Slab 581016 kB, SUnreclaim 376572 kB, KernelStack 12832 kB, PageTables 8316 kB, Committed_AS 10024740 kB
00:04:30.014 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@23-25: as before, no node was given, so the per-node meminfo path does not exist and the global /proc/meminfo is parsed
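Every get_meminfo call in this run takes the /proc/meminfo path because the node argument is empty (which is why the trace tests the non-existent /sys/devices/system/node/node/meminfo). For a per-node lookup the only extra step is stripping the 'Node N ' prefix before the same IFS=': ' split; the line below is illustrative and not taken from this log.

  shopt -s extglob
  line='Node 0 HugePages_Free: 512'
  line=${line#Node +([0-9]) }             # -> 'HugePages_Free: 512'
  IFS=': ' read -r var val _ <<< "$line"
  echo "$var=$val"                        # -> HugePages_Free=512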
00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32-33: the scan skips every key from MemTotal through HugePages_Free, matches HugePages_Rsvd, echoes 0 and returns 0
00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:30.015 nr_hugepages=1024
00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:30.015 resv_hugepages=0
00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:30.015 surplus_hugepages=0
00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:30.015 anon_hugepages=0
00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@17-31: same setup as before (get=HugePages_Total, node empty, mem_f=/proc/meminfo, mapfile + prefix strip + IFS=': ' read loop)
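The two arithmetic tests at hugepages.sh@107 and @109 compare a target page count of 1024 against nr_hugepages plus the surplus and reserved counts just collected. A sketch of that kind of verification, reusing get_meminfo_sketch from the earlier example; variable names are illustrative and the SPDK script's exact expressions may differ.

  nr_hugepages=1024; anon=0; surp=0; resv=0
  total=$(get_meminfo_sketch HugePages_Total)
  if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
      echo "hugepage pool matches: total=$total free=$(get_meminfo_sketch HugePages_Free)"
  else
      echo "unexpected hugepage accounting: total=$total" >&2
  fi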
'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536876 kB' 'Mapped: 204256 kB' 'Shmem: 8358820 kB' 'KReclaimable: 204444 kB' 'Slab: 581008 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376564 kB' 'KernelStack: 12816 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10024760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.015 04:20:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.015 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.016 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
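Editorial note: the trace above shows setup/common.sh resolving a single key from a meminfo snapshot: it picks /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node argument is given), strips the "Node <N> " prefix, then walks the key/value pairs until the requested key matches and echoes its value. A minimal stand-alone sketch of the same idea follows; it is not the SPDK helper itself, and the function name lookup_meminfo and its calling convention are illustrative.

    #!/usr/bin/env bash
    # Illustrative sketch only - not the SPDK get_meminfo implementation.
    # Look up one key from /proc/meminfo, or from a per-node meminfo file when a
    # node number is given (per-node lines carry a leading "Node <N> " prefix).
    lookup_meminfo() {
        local key=$1 node=${2:-}
        local file=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            [[ $line == "Node "* ]] && line=${line#Node * }   # drop "Node <N> "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$key" ]]; then
                echo "${val:-0}"                              # value only, unit dropped
                return 0
            fi
        done < "$file"
        return 1
    }

    lookup_meminfo HugePages_Total       # system-wide, e.g. 1024 in this run
    lookup_meminfo HugePages_Surp 0      # node 0 only, e.g. 0 in this run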
00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' [node0 meminfo snapshot; the key values read 'MemTotal: 32829884 kB', 'MemFree: 26499136 kB', 'MemUsed: 6330748 kB', 'HugePages_Total: 1024', 'HugePages_Free: 1024', 'HugePages_Surp: 0']
00:04:30.017 04:20:50 setup.sh.hugepages.default_setup -- [trace condensed: setup/common.sh@31-32 scans each node0 snapshot key from MemTotal through HugePages_Free, issuing continue until HugePages_Surp is reached]
00:04:30.018 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.018 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:30.018 04:20:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:30.018 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:30.018 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:30.018 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:30.018 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:30.018 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:30.018 node0=1024 expecting 1024
00:04:30.018 04:20:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:30.018 
00:04:30.018 real	0m2.438s
00:04:30.018 user	0m0.641s
00:04:30.018 sys	0m0.872s
00:04:30.018 04:20:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:30.018 04:20:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:30.018 ************************************
00:04:30.018 END TEST default_setup
00:04:30.018 ************************************
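Editorial note: default_setup passes only if the pool bookkeeping above is consistent - the reported total must equal the requested count plus surplus and reserved pages ((( 1024 == nr_hugepages + surp + resv ))), and the per-node counters must add up to the same total ("node0=1024 expecting 1024"). A small stand-alone restatement of that arithmetic follows; the variable names are illustrative and the values are the ones this run reported.

    #!/usr/bin/env bash
    # Illustrative restatement of the default_setup bookkeeping (values from this run).
    nr_hugepages=1024   # requested pool size
    surp=0              # HugePages_Surp from /proc/meminfo
    resv=0              # HugePages_Rsvd from /proc/meminfo
    total=1024          # HugePages_Total from /proc/meminfo

    # Pool is consistent only if reported total == requested + surplus + reserved ...
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"

    # ... and the per-node counters must sum to that total (node0 holds all 1024 here).
    declare -A node_pages=( [0]=1024 [1]=0 )
    sum=0
    for n in "${!node_pages[@]}"; do (( sum += node_pages[n] )); done
    (( sum == total )) || echo "per-node distribution mismatch"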
00:04:30.018 04:20:50 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:30.018 04:20:50 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:30.018 04:20:50 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:30.018 04:20:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:30.018 ************************************
00:04:30.018 START TEST per_node_1G_alloc
00:04:30.018 ************************************
00:04:30.018 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:04:30.018 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:30.018 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:30.018 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:30.018 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:30.018 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:30.018 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:30.018 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:30.018 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:30.018 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:30.018 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:30.018 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:30.018 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:30.018 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:30.018 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:30.018 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:30.311 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:30.311 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:30.311 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:30.311 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:30.311 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:30.311 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:30.311 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:30.311 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:30.311 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:30.311 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:30.311 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:30.311 04:20:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:31.246 [setup.sh output condensed: 0000:88:00.0 (8086 0a54) and the sixteen 8086 0e20-0e27 devices at 0000:00:04.0-7 and 0000:80:04.0-7 all report "Already using the vfio-pci driver"]
00:04:31.511 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
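Editorial note: the get_test_nr_hugepages trace above turns the requested per-node allocation into a page count that is consistent with the values this run reports - 1048576 kB (1 GiB) divided by the 2048 kB default hugepage size gives 512 pages, which is then assigned to each of the two requested nodes (NRHUGE=512, HUGENODE=0,1). A hedged sketch of that conversion follows; the variable names are illustrative, not the SPDK script's own.

    #!/usr/bin/env bash
    # Illustrative: derive the per-node hugepage count the way the trace above reports it.
    size_kb=1048576           # requested size per node: 1 GiB expressed in kB
    default_hugepage_kb=2048  # Hugepagesize from /proc/meminfo in this run
    nodes=(0 1)               # target NUMA nodes

    (( size_kb >= default_hugepage_kb )) || { echo "size smaller than one hugepage" >&2; exit 1; }
    nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 1048576 / 2048 = 512

    declare -A per_node
    for n in "${nodes[@]}"; do per_node[$n]=$nr_hugepages; done

    echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${nodes[*]}")"   # NRHUGE=512 HUGENODE=0,1
    for n in "${nodes[@]}"; do echo "node$n: ${per_node[$n]} hugepages"; done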
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:31.511 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:31.511 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:31.511 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:31.511 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:31.511 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:31.511 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
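Editorial note: the @96 test above compares a transparent-hugepage state string ("always [madvise] never" in this run, presumably read from /sys/kernel/mm/transparent_hugepage/enabled) against the literal "[never]"; only when THP is not pinned to [never] does verify_nr_hugepages go on to sample AnonHugePages. A minimal stand-alone version of that guard, for illustration only (the path and variable names are assumptions, not taken from the trace):

    #!/usr/bin/env bash
    # Illustrative guard: only sample AnonHugePages when THP is not set to [never].
    thp_enabled=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp_enabled != *"[never]"* ]]; then
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "AnonHugePages: ${anon_kb} kB"
    else
        echo "THP disabled; AnonHugePages not expected to grow"
    fi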
-- # continue 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.512 04:20:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.512 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
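The trace above is setup/common.sh's get_meminfo helper scanning every /proc/meminfo field with an IFS=': ' read loop until it reaches the requested key (AnonHugePages here) and echoing its value, 0, which hugepages.sh then stores as anon=0. The stand-alone sketch below reproduces that lookup outside the test harness; the function name is illustrative and not part of SPDK, and the per-node branch assumes the kernel exposes /sys/devices/system/node/node<N>/meminfo, whose lines carry a "Node <N> " prefix that must be stripped before matching (the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace does the same thing).

#!/usr/bin/env bash
# Illustrative sketch only -- not the project's setup/common.sh.
# Print one numeric field from /proc/meminfo, or from a NUMA node's
# meminfo file when a node id is given.
shopt -s extglob

get_meminfo_field() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    local -a mem
    local line var val _

    # Use the per-node file only when a node id was passed and the sysfs
    # path exists; with no argument this test fails, which is why the trace
    # above probes /sys/devices/system/node/node/meminfo (empty node id)
    # and falls back to /proc/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node <N> " prefix, if any

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo_field AnonHugePages       # e.g. 0, matching the anon=0 above
get_meminfo_field HugePages_Total 0   # per-node count, if node0 exposes one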
00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.513 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43806148 kB' 'MemAvailable: 47315652 kB' 'Buffers: 2704 kB' 'Cached: 12257104 kB' 'SwapCached: 0 kB' 'Active: 9287384 kB' 'Inactive: 3506552 kB' 'Active(anon): 8893032 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537396 kB' 'Mapped: 204284 kB' 'Shmem: 8358904 kB' 'KReclaimable: 204444 kB' 'Slab: 580932 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376488 kB' 'KernelStack: 12912 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10025100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.514 04:20:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.514 04:20:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.514 04:20:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.514 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.515 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43807264 kB' 'MemAvailable: 47316768 kB' 'Buffers: 2704 kB' 'Cached: 12257124 kB' 'SwapCached: 0 kB' 'Active: 9287404 kB' 'Inactive: 3506552 kB' 'Active(anon): 8893052 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537344 kB' 'Mapped: 204284 kB' 'Shmem: 8358924 kB' 'KReclaimable: 204444 kB' 'Slab: 580972 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376528 kB' 'KernelStack: 12896 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10025124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196692 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.516 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.517 04:20:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.517 04:20:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.517 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.518 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:31.519 nr_hugepages=1024 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:31.519 resv_hugepages=0 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:31.519 surplus_hugepages=0 00:04:31.519 04:20:51 
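With anon, surp and resv all read back as 0, the trace next runs the arithmetic checks at setup/hugepages.sh@107 and @109, comparing 1024 against nr_hugepages plus the surplus and reserved counts and then against nr_hugepages alone, before fetching HugePages_Total. The snippet below is a minimal stand-alone version of the same kind of consistency check, not the test's own code; the variable names are illustrative.

# Sketch only: verify the kernel's HugePages_Total against the requested
# count plus surplus and reserved pages, mirroring the check in the trace.
nr_hugepages=1024; surp=0; resv=0
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: total=$total"
else
    echo "unexpected hugepage total: $total (wanted $((nr_hugepages + surp + resv)))" >&2
fi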
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:31.519 anon_hugepages=0 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43807264 kB' 'MemAvailable: 47316768 kB' 'Buffers: 2704 kB' 'Cached: 12257124 kB' 'SwapCached: 0 kB' 'Active: 9287124 kB' 'Inactive: 3506552 kB' 'Active(anon): 8892772 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537064 kB' 'Mapped: 204284 kB' 'Shmem: 8358924 kB' 'KReclaimable: 204444 kB' 'Slab: 580972 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376528 kB' 'KernelStack: 12896 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10025148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196692 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.519 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.520 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.521 04:20:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27554188 kB' 'MemUsed: 5275696 kB' 'SwapCached: 0 kB' 'Active: 3168588 kB' 'Inactive: 108416 kB' 'Active(anon): 3057700 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3039416 kB' 'Mapped: 33808 kB' 'AnonPages: 240748 kB' 'Shmem: 2820112 kB' 'KernelStack: 7976 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102808 kB' 'Slab: 323140 kB' 'SReclaimable: 102808 kB' 'SUnreclaim: 220332 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.521 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.522 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16252824 kB' 'MemUsed: 11459000 kB' 'SwapCached: 0 kB' 'Active: 6118860 kB' 'Inactive: 3398136 kB' 'Active(anon): 5835396 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3398136 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9220456 kB' 'Mapped: 170476 kB' 'AnonPages: 296588 kB' 'Shmem: 5538856 kB' 'KernelStack: 4920 kB' 'PageTables: 3960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101636 kB' 'Slab: 257832 kB' 'SReclaimable: 101636 kB' 'SUnreclaim: 156196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.523 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 
04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.524 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:31.525 node0=512 expecting 512 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:31.525 node1=512 expecting 512 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:31.525 00:04:31.525 real 0m1.400s 00:04:31.525 user 0m0.601s 00:04:31.525 sys 0m0.758s 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:31.525 04:20:51 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:31.525 ************************************ 00:04:31.525 END TEST per_node_1G_alloc 00:04:31.525 ************************************ 00:04:31.525 04:20:51 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:31.525 04:20:51 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:31.525 04:20:51 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:31.525 04:20:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:31.525 ************************************ 00:04:31.525 
START TEST even_2G_alloc 00:04:31.525 ************************************ 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.525 04:20:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:32.908 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:32.908 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:32.908 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:32.908 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:32.908 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:32.908 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 
00:04:32.908 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:32.908 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:32.908 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:32.908 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:32.908 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:32.908 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:32.908 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:32.908 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:32.908 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:32.908 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:32.908 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43784084 kB' 'MemAvailable: 47293588 kB' 'Buffers: 2704 kB' 'Cached: 12257244 kB' 'SwapCached: 0 kB' 'Active: 9289844 kB' 'Inactive: 3506552 kB' 'Active(anon): 8895492 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539712 kB' 'Mapped: 205212 kB' 'Shmem: 8359044 kB' 'KReclaimable: 204444 kB' 'Slab: 580784 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376340 kB' 
'KernelStack: 12880 kB' 'PageTables: 8312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10028208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.908 
04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.908 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
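The long [[ ... == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue runs above, and the HugePages_Surp and HugePages_Rsvd runs that follow, are setup/common.sh's get_meminfo scanning every meminfo field until it reaches the one requested and echoing its value. A self-contained sketch of that lookup pattern; the helper name and argument order here are illustrative (the real function is get_meminfo, driven by a node= local set by the caller):

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) prefix strip below

    # meminfo_value <field> [numa-node]: print one field from /proc/meminfo,
    # or from the per-node copy when a node number is given.
    meminfo_value() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    meminfo_value HugePages_Surp       # system-wide surplus huge pages
    meminfo_value HugePages_Free 0     # free huge pages on NUMA node 0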
00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.909 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.910 04:20:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43787404 kB' 'MemAvailable: 47296908 kB' 'Buffers: 2704 kB' 'Cached: 12257244 kB' 'SwapCached: 0 kB' 'Active: 9293028 kB' 'Inactive: 3506552 kB' 'Active(anon): 8898676 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542888 kB' 'Mapped: 204732 kB' 'Shmem: 8359044 kB' 'KReclaimable: 204444 kB' 'Slab: 580784 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376340 kB' 'KernelStack: 12928 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10031536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196696 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.910 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
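verify_nr_hugepages is collecting anon (AnonHugePages), surp (HugePages_Surp) and, next, resv (HugePages_Rsvd) before comparing the pool against what the test configured; the dump above already reports HugePages_Total: 1024 and HugePages_Free: 1024, matching the 1024 pages requested. A simplified stand-in for that final comparison, not the exact per-node bookkeeping hugepages.sh performs:

    # Confirm the kernel actually exposes the huge pages the test configured.
    expected=1024
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    free=$(awk '/^HugePages_Free:/  {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    if (( total != expected )); then
        echo "expected $expected huge pages, found $total" >&2
        exit 1
    fi
    echo "HugePages: total=$total free=$free surplus=$surp reserved=$rsvd"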
00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43788064 kB' 'MemAvailable: 47297568 kB' 'Buffers: 2704 kB' 'Cached: 12257248 kB' 'SwapCached: 0 kB' 'Active: 9293592 kB' 'Inactive: 3506552 kB' 'Active(anon): 8899240 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543452 kB' 'Mapped: 205212 kB' 'Shmem: 8359048 kB' 'KReclaimable: 204444 kB' 'Slab: 580772 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376328 kB' 'KernelStack: 12928 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10031560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196680 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:32.911 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.912 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
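The node= parameter these get_meminfo calls leave empty is what the per-node lookups use: each NUMA node exposes its own meminfo under /sys/devices/system/node/nodeN/, which is how the 512-per-node expectations earlier in the log can be checked. A short sketch of that per-node view; the paths are standard kernel sysfs, and the awk field position assumes the usual "Node N HugePages_Free: ..." layout:

    # Per-node huge page counters from sysfs.
    for f in /sys/devices/system/node/node[0-9]*/meminfo; do
        node=${f%/meminfo}
        node=${node##*node}
        free=$(awk '/HugePages_Free:/ {print $4}' "$f")   # $4: value after "Node N HugePages_Free:"
        echo "node$node HugePages_Free=$free"
    done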
[trace condensed: setup/common.sh's IFS=': ' read loop steps through the remaining /proc/meminfo keys (SwapTotal through HugePages_Free), executing "continue" for each non-matching key, until it reaches HugePages_Rsvd]
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:32.913 nr_hugepages=1024
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:32.913 resv_hugepages=0
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:32.913 surplus_hugepages=0
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:32.913 anon_hugepages=0
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:32.913 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:32.914 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43788064 kB' 'MemAvailable: 47297568 kB' 'Buffers: 2704 kB' 'Cached: 12257284 kB' 'SwapCached: 0 kB' 'Active: 9287540 kB' 'Inactive: 3506552 kB' 'Active(anon): 8893188 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537356 kB' 'Mapped: 204776 kB' 'Shmem: 8359084 kB' 'KReclaimable: 204444 kB' 'Slab: 580772 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376328 kB' 'KernelStack: 12896 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10025460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196692 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB'
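The dump above is the argument list that get_meminfo feeds back through its read loop. As a rough illustration of the pattern visible in this trace (split each meminfo line on ': ' and stop at the requested key), here is a small stand-alone Bash sketch; the function name get_meminfo_sketch and its argument order are assumptions for the example, not SPDK's actual setup/common.sh:

    #!/usr/bin/env bash
    # Illustrative sketch of the lookup pattern traced above: read a meminfo file,
    # split each line on ': ', and print the value for one requested key.
    get_meminfo_sketch() {
        local key=$1 node=${2:-}
        local file=/proc/meminfo
        # A node index switches to the per-NUMA-node copy, whose lines carry a "Node N " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        local var val rest
        while IFS=': ' read -r var val rest; do
            if [[ $var == "$key" ]]; then
                echo "$val"    # numeric value only; any trailing "kB" ends up in $rest
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ //' "$file")
        return 1
    }

    get_meminfo_sketch HugePages_Total     # 1024 on the machine in this log
    get_meminfo_sketch HugePages_Surp 0    # per-node query; 0 is expected here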
[trace condensed: the read loop walks every /proc/meminfo key just printed (MemTotal through Unaccepted), executing "continue" for each non-matching key, until [[ HugePages_Total == HugePages_Total ]] finally matches]
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:32.915 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27546432 kB' 'MemUsed: 5283452 kB' 'SwapCached: 0 kB' 'Active: 3168664 kB' 'Inactive: 108416 kB' 'Active(anon): 3057776 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3039480 kB' 'Mapped: 33820 kB' 'AnonPages: 240816 kB' 'Shmem: 2820176 kB' 'KernelStack: 7976 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102808 kB' 'Slab: 323068 kB' 'SReclaimable: 102808 kB' 'SUnreclaim: 220260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
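For context on the get_nodes step traced above (two NUMA nodes, 512 pages recorded for each), the following is a hedged, stand-alone sketch of how per-node hugepage counts can be read from the per-node meminfo files; the variable names are illustrative and this only mirrors, rather than reproduces, the script's own helper:

    #!/usr/bin/env bash
    # Sketch: discover NUMA nodes from sysfs and read each node's hugepage count.
    declare -A nodes_seen
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        [[ -d $node_dir ]] || continue
        node=${node_dir##*node}    # ".../node1" -> "1"
        # Per-node meminfo lines look like "Node 1 HugePages_Total:   512"; field 4 is the count.
        nodes_seen[$node]=$(awk '/HugePages_Total/ {print $4}' "$node_dir/meminfo")
    done
    echo "nodes found: ${!nodes_seen[*]}"              # "0 1" on the machine in this log
    for n in "${!nodes_seen[@]}"; do
        echo "node$n: ${nodes_seen[$n]} hugepages"     # the trace above expects 512 each
    done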
[trace condensed: the same IFS=': ' read loop walks node0's meminfo keys (MemTotal through HugePages_Free), executing "continue" for each, until HugePages_Surp matches]
00:04:32.916 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.916 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:32.916 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:32.916 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:32.916 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:32.916 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:32.916 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:32.916 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:32.916 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:04:32.916 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:32.916 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:32.916 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.917 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:32.917 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:32.917 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.917 04:20:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16242088 kB' 'MemUsed: 11469736 kB' 'SwapCached: 0 kB' 'Active: 6118976 kB' 'Inactive: 3398136 kB' 'Active(anon): 5835512 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3398136 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9220552 kB' 'Mapped: 170476 kB' 'AnonPages: 296656 kB' 'Shmem: 5538952 kB' 'KernelStack: 4952 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101636 kB' 'Slab: 257704 kB' 'SReclaimable: 101636 kB' 'SUnreclaim: 156068 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
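The trace above and below repeats one pattern from setup/common.sh's get_meminfo helper: it picks either /proc/meminfo or the per-NUMA-node file /sys/devices/system/node/node<N>/meminfo, strips the leading "Node <N> " prefix, then splits each line on ': ' and echoes the value whose key matches the requested field (here HugePages_Surp for node 1). The following is only a minimal sketch of that pattern under stated assumptions, not the exact SPDK implementation: the prefix strip uses sed instead of the extglob expansion "${mem[@]#Node +([0-9]) }" seen in the trace, and there is no error handling.

#!/usr/bin/env bash
# Sketch (assumption: simplified re-creation of the get_meminfo pattern traced above).
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # Per-node counters live under /sys/devices/system/node/node<N>/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    # Per-node files prefix every line with "Node <N> "; drop that prefix.
    mapfile -t mem < <(sed 's/^Node [0-9]* //' "$mem_f")
    local line var val _
    for line in "${mem[@]}"; do
        # Fields look like "HugePages_Surp:       0"; split key from value.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Example use matching the trace: surplus 2 MiB hugepages on NUMA node 1.
get_meminfo HugePages_Surp 1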
00:04:32.917 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:32.918 node0=512 expecting 512 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:32.918 node1=512 expecting 512 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:32.918 00:04:32.918 real 0m1.387s 00:04:32.918 user 0m0.581s 00:04:32.918 sys 0m0.766s 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:32.918 04:20:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:32.918 ************************************ 00:04:32.918 END TEST even_2G_alloc 00:04:32.918 ************************************ 00:04:32.918 04:20:53 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:32.918 04:20:53 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:32.918 04:20:53 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:32.918 04:20:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:32.918 ************************************ 00:04:32.918 START TEST odd_alloc 00:04:32.918 ************************************ 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:32.918 04:20:53 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.918 04:20:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:34.298 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:34.298 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:34.298 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:34.298 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:34.298 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:34.298 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:34.298 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:34.298 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:34.298 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:34.298 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:34.298 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:34.298 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:34.298 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:34.298 0000:80:04.3 
(8086 0e23): Already using the vfio-pci driver 00:04:34.298 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:34.298 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:34.298 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43778920 kB' 'MemAvailable: 47288424 kB' 'Buffers: 2704 kB' 'Cached: 12257368 kB' 'SwapCached: 0 kB' 'Active: 9284360 kB' 'Inactive: 3506552 kB' 'Active(anon): 8890008 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533964 kB' 'Mapped: 203520 kB' 'Shmem: 8359168 kB' 'KReclaimable: 204444 kB' 'Slab: 580876 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376432 kB' 'KernelStack: 12832 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10011576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1922652 kB' 
'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.298 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.299 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43785664 kB' 'MemAvailable: 47295168 kB' 'Buffers: 2704 kB' 'Cached: 12257372 kB' 'SwapCached: 0 kB' 'Active: 9284488 kB' 'Inactive: 3506552 kB' 'Active(anon): 8890136 kB' 'Inactive(anon): 0 
kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534116 kB' 'Mapped: 203512 kB' 'Shmem: 8359172 kB' 'KReclaimable: 204444 kB' 'Slab: 580876 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376432 kB' 'KernelStack: 12880 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10011592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196708 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 
04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.300 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 
04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43786036 kB' 'MemAvailable: 47295540 kB' 'Buffers: 2704 kB' 'Cached: 12257392 kB' 'SwapCached: 0 kB' 'Active: 9284588 kB' 'Inactive: 3506552 kB' 'Active(anon): 8890236 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534168 kB' 'Mapped: 203436 kB' 'Shmem: 8359192 kB' 'KReclaimable: 204444 kB' 'Slab: 580860 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376416 kB' 'KernelStack: 12896 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10011612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196708 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.301 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.302 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 
04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:34.303 nr_hugepages=1025 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:34.303 resv_hugepages=0 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:34.303 surplus_hugepages=0 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:34.303 anon_hugepages=0 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.303 04:20:54 
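The long runs of '[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]]' / 'continue' statements in this part of the trace are get_meminfo in setup/common.sh walking /proc/meminfo one key at a time until it reaches the requested field; the two lookups that just completed resolved HugePages_Surp and HugePages_Rsvd to 0 (surp=0, resv=0) against nr_hugepages=1025. A minimal bash sketch of that helper as the trace implies it follows; the function body is reconstructed from the xtrace, not copied from the SPDK tree.

shopt -s extglob
# Sketch of setup/common.sh:get_meminfo as implied by the xtrace above
# (reconstruction from the trace, not the verbatim SPDK source).
get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo mem
    # A node argument switches to that node's meminfo file
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node N "; strip it
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # skip keys until the requested one
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}
# e.g. surp=$(get_meminfo HugePages_Surp); node0_total=$(get_meminfo HugePages_Total 0)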
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43785784 kB' 'MemAvailable: 47295288 kB' 'Buffers: 2704 kB' 'Cached: 12257412 kB' 'SwapCached: 0 kB' 'Active: 9285032 kB' 'Inactive: 3506552 kB' 'Active(anon): 8890680 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534688 kB' 'Mapped: 203436 kB' 'Shmem: 8359212 kB' 'KReclaimable: 204444 kB' 'Slab: 580860 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376416 kB' 'KernelStack: 12912 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10012632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.303 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.304 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27543608 kB' 'MemUsed: 5286276 kB' 'SwapCached: 0 kB' 'Active: 3167744 kB' 'Inactive: 108416 kB' 'Active(anon): 3056856 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3039556 kB' 'Mapped: 33100 kB' 'AnonPages: 239716 kB' 'Shmem: 2820252 kB' 'KernelStack: 7944 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102808 kB' 'Slab: 323092 kB' 'SReclaimable: 102808 kB' 'SUnreclaim: 220284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- 
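The hugepages.sh@112 through @117 statements just above record the expected odd split across the two NUMA nodes (nodes_sys[0]=512, nodes_sys[1]=513, no_nodes=2, totalling the 1025 pages requested) and then start folding each node's reserved and surplus pages into the per-node targets before comparing them with what the kernel actually reports. A small sketch of that accounting follows, assuming two nodes and the get_meminfo sketch shown earlier; variable names follow the trace, but the exact hugepages.sh logic is not reproduced verbatim.

shopt -s extglob
# Sketch of the per-node accounting implied by hugepages.sh@112-@117
# (uses the get_meminfo sketch above; not the verbatim SPDK source).
nodes_sys=() nodes_test=()
for node in /sys/devices/system/node/node+([0-9]); do
    # what the kernel actually allocated on each node
    nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
done
nodes_test=([0]=512 [1]=513)  # expected odd split of the 1025 pages
resv=0                        # reserved pages resolved earlier in the trace
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    (( nodes_sys[node] == nodes_test[node] )) ||
        echo "node$node: expected ${nodes_test[node]}, got ${nodes_sys[node]}"
done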
setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.305 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16242076 kB' 'MemUsed: 11469748 kB' 'SwapCached: 0 kB' 'Active: 6118620 kB' 'Inactive: 3398136 kB' 'Active(anon): 5835156 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3398136 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9220580 kB' 'Mapped: 170336 kB' 'AnonPages: 296216 kB' 'Shmem: 5538980 kB' 'KernelStack: 5432 kB' 'PageTables: 5388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101636 kB' 'Slab: 257768 kB' 'SReclaimable: 101636 kB' 'SUnreclaim: 156132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.306 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.306 04:20:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.307 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.307 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.307 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.307 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.307 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.307 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.307 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.307 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.307 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.308 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- 
# echo 'node0=512 expecting 513' 00:04:34.309 node0=512 expecting 513 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:34.309 node1=513 expecting 512 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:34.309 00:04:34.309 real 0m1.400s 00:04:34.309 user 0m0.558s 00:04:34.309 sys 0m0.801s 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:34.309 04:20:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:34.309 ************************************ 00:04:34.309 END TEST odd_alloc 00:04:34.309 ************************************ 00:04:34.568 04:20:54 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:34.568 04:20:54 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:34.568 04:20:54 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:34.568 04:20:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:34.568 ************************************ 00:04:34.568 START TEST custom_alloc 00:04:34.568 ************************************ 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:34.568 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 
-- # for node in "${!nodes_hp[@]}" 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.569 04:20:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:35.503 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:35.503 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:35.503 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:35.503 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:35.503 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:35.503 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:35.503 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:35.503 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:35.503 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:35.503 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:35.503 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:35.503 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:35.503 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:35.503 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:35.503 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:35.503 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:35.503 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42732056 kB' 'MemAvailable: 46241560 kB' 'Buffers: 2704 kB' 'Cached: 12257504 kB' 'SwapCached: 0 kB' 'Active: 9285056 kB' 'Inactive: 3506552 kB' 'Active(anon): 8890704 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534692 kB' 'Mapped: 203532 kB' 'Shmem: 8359304 kB' 'KReclaimable: 204444 kB' 'Slab: 580724 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376280 kB' 'KernelStack: 12880 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10012004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.784 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:35.785 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42732396 kB' 'MemAvailable: 46241900 kB' 'Buffers: 2704 kB' 'Cached: 12257504 kB' 'SwapCached: 0 kB' 'Active: 9284660 kB' 'Inactive: 3506552 kB' 'Active(anon): 8890308 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534268 kB' 'Mapped: 203528 kB' 'Shmem: 8359304 kB' 'KReclaimable: 204444 kB' 'Slab: 580724 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376280 kB' 'KernelStack: 12880 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10012020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.786 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
[... field-by-field scan of the /proc/meminfo snapshot elided: setup/common.sh@31-32 compares every key from SwapCached through HugePages_Total against HugePages_Surp and skips it with 'continue'; the matching key and its value follow below ...]
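What this stretch of the trace is doing: setup/common.sh reads the /proc/meminfo snapshot printed above one 'key: value' pair at a time (IFS=': ' plus read -r var val _), skipping every key until it finds the requested one (here HugePages_Surp) and then echoing its value; the match and the 'echo 0' appear just below. A minimal, self-contained sketch of that lookup pattern, with hypothetical names; the real get_meminfo in setup/common.sh may differ in detail:

#!/usr/bin/env bash
# Minimal sketch of the lookup pattern shown in the trace; the real
# get_meminfo in setup/common.sh may differ in detail.
shopt -s extglob

meminfo_value() {
    local key=$1 node=${2:-} file=/proc/meminfo line var val _
    # With a node index, read the per-node copy from sysfs instead.
    if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
        file=/sys/devices/system/node/node${node}/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }              # sysfs per-node lines start with "Node N "
        IFS=': ' read -r var val _ <<< "$line"   # split "Key:   value kB"
        if [[ $var == "$key" ]]; then            # non-matching keys are the 'continue' lines above
            echo "${val:-0}"
            return 0
        fi
    done < "$file"
    return 1
}

surp=$(meminfo_value HugePages_Surp)             # -> 0 on this host, per the snapshot above
node0_total=$(meminfo_value HugePages_Total 0)   # per-node variant used later in the trace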
00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42731660 kB' 'MemAvailable: 46241164 kB' 'Buffers: 2704 kB' 'Cached: 12257524 kB' 'SwapCached: 0 kB' 'Active: 9284712 kB' 'Inactive: 3506552 kB' 'Active(anon): 8890360 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534264 kB' 'Mapped: 203452 kB' 'Shmem: 8359324 kB' 'KReclaimable: 204444 kB' 'Slab: 580708 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376264 kB' 'KernelStack: 12880 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10012040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.787 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.788 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.788 04:20:55 
[... field-by-field scan elided: every key from Active(anon) through HugePages_Free is compared against HugePages_Rsvd and skipped with 'continue' ...]
00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc
-- setup/hugepages.sh@100 -- # resv=0 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:35.789 nr_hugepages=1536 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:35.789 resv_hugepages=0 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:35.789 surplus_hugepages=0 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:35.789 anon_hugepages=0 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42731660 kB' 'MemAvailable: 46241164 kB' 'Buffers: 2704 kB' 'Cached: 12257548 kB' 'SwapCached: 0 kB' 'Active: 9284772 kB' 'Inactive: 3506552 kB' 'Active(anon): 8890420 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534344 kB' 'Mapped: 203452 kB' 'Shmem: 8359348 kB' 'KReclaimable: 204444 kB' 'Slab: 580708 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376264 kB' 'KernelStack: 12912 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10012064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:35.789 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
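Around setup/hugepages.sh@99-110 the script derives surp and resv from the snapshots above, echoes the resulting totals (nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), and re-reads HugePages_Total to confirm the pool is fully accounted for; that HugePages_Total lookup starts with the snapshot above and finishes below. A condensed sketch of the bookkeeping, using an illustrative awk helper (hp) rather than the script's own get_meminfo:

#!/usr/bin/env bash
# Illustrative accounting check; hp() is a stand-in, not the script's get_meminfo.
hp() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

nr_hugepages=1536                        # requested pool: 512 (node0) + 1024 (node1)
surp=$(hp HugePages_Surp)                # 0 in the trace above
resv=$(hp HugePages_Rsvd)                # 0 in the trace above
total=$(hp HugePages_Total)              # 1536 in the trace above

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
# The pool is consistent only if every allocated page is accounted for:
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2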
[... field-by-field scan elided: every key from MemTotal through FileHugePages is compared against HugePages_Total and skipped with 'continue' ...]
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27527600 kB' 'MemUsed: 5302284 kB' 'SwapCached: 0 kB' 'Active: 3167412 kB' 'Inactive: 108416 kB' 'Active(anon): 3056524 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3039676 kB' 'Mapped: 33112 kB' 'AnonPages: 239320 kB' 'Shmem: 2820372 kB' 'KernelStack: 7960 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102808 kB' 'Slab: 323000 kB' 'SReclaimable: 102808 kB' 'SUnreclaim: 220192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.791 04:20:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.791 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.792 04:20:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 15204472 kB' 'MemUsed: 12507352 kB' 'SwapCached: 0 kB' 'Active: 6117360 kB' 'Inactive: 3398136 kB' 'Active(anon): 5833896 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3398136 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9220596 kB' 'Mapped: 170340 kB' 'AnonPages: 295032 kB' 'Shmem: 5538996 kB' 'KernelStack: 4952 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101636 kB' 'Slab: 257708 kB' 'SReclaimable: 101636 kB' 'SUnreclaim: 156072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.792 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
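The field-by-field scan traced above and below is setup/common.sh's get_meminfo helper walking a node's meminfo file until it reaches the requested field. Condensed into plain shell, the pattern the xtrace shows is roughly the following; this is a reconstruction from the trace, not the verbatim SPDK script, and the redirection into mapfile is an assumption where the trace is silent:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern seen in the xtrace: return one meminfo
    # field, optionally scoped to a single NUMA node.
    shopt -s extglob                       # for the +([0-9]) prefix pattern below

    get_meminfo() {
            local get=$1 node=$2 var val line
            local mem_f=/proc/meminfo mem
            # Per-node queries read the node's own meminfo file when it exists.
            if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                    mem_f=/sys/devices/system/node/node$node/meminfo
            fi
            mapfile -t mem < "$mem_f"
            # Node meminfo lines carry a "Node <N> " prefix; strip it off.
            mem=("${mem[@]#Node +([0-9]) }")
            for line in "${mem[@]}"; do
                    IFS=': ' read -r var val _ <<< "$line"
                    [[ $var == "$get" ]] && { echo "$val"; return 0; }
            done
            return 1
    }

    get_meminfo HugePages_Surp 1           # prints 0 for the node1 dump above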
00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.793 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.794 04:20:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:35.794 node0=512 expecting 512 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:35.794 node1=1024 expecting 1024 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:35.794 00:04:35.794 real 0m1.441s 00:04:35.794 user 0m0.588s 00:04:35.794 sys 0m0.814s 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:35.794 04:20:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:35.794 ************************************ 00:04:35.794 END TEST custom_alloc 00:04:35.794 ************************************ 00:04:36.052 04:20:55 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:36.052 04:20:55 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:36.052 04:20:55 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:36.052 04:20:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:36.052 ************************************ 00:04:36.052 START TEST no_shrink_alloc 00:04:36.052 ************************************ 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
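custom_alloc has just confirmed the 512/1024 split it asked for, and no_shrink_alloc now requests 1024 two-megabyte pages pinned to node 0 before handing off to scripts/setup.sh. The splits these tests expect correspond to the kernel's per-node hugepage counters; writing those sysfs knobs directly would look roughly like this (a hedged sketch of the standard kernel interface, not necessarily how setup.sh itself performs the allocation; run as root, 2048 kB pages assumed):

    # custom_alloc-style split: 512 pages on node0, 1024 on node1
    echo 512  > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
    # no_shrink_alloc-style request: 1024 pages on node0 only
    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    # Confirm the result from the same files get_meminfo reads
    grep HugePages_Total /sys/devices/system/node/node*/meminfo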
00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.052 04:20:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:36.987 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:36.987 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:36.987 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:36.987 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:36.987 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:36.987 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:36.987 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:36.987 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:36.987 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:36.987 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:36.987 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:36.987 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:36.987 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:36.987 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:36.987 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:36.987 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:36.987 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.250 04:20:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43762660 kB' 'MemAvailable: 47272164 kB' 'Buffers: 2704 kB' 'Cached: 12257632 kB' 'SwapCached: 0 kB' 'Active: 9285160 kB' 'Inactive: 3506552 kB' 'Active(anon): 8890808 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534532 kB' 'Mapped: 203480 kB' 'Shmem: 8359432 kB' 'KReclaimable: 204444 kB' 'Slab: 580704 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376260 kB' 'KernelStack: 12864 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10012128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
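The verify pass running here (hugepages.sh@96 through @99 in the trace) only counts anonymous huge pages when transparent_hugepage is not pinned to [never], then reads the surplus count and balances the pool with the same check traced earlier at hugepages.sh@110, HugePages_Total == nr_hugepages + surp + resv. Below is a hedged spot-check of that bookkeeping using awk instead of the script's read loop; treating resv as HugePages_Rsvd is an assumption, since the trace never shows where that value is read:

    # Anonymous THP only counts when transparent_hugepage is not "[never]"
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    [[ $thp != *"[never]"* ]] && anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)

    req=1024                                  # what no_shrink_alloc requests here
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)   # assumed source of resv
    if (( total == req + surp + resv )); then
            echo "hugepage pool consistent: $total == $req + $surp + $resv (anon=$anon)"
    else
            echo "mismatch: total=$total req=$req surp=$surp resv=$resv anon=$anon"
    fi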
00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43762660 kB' 'MemAvailable: 47272164 kB' 'Buffers: 2704 kB' 'Cached: 12257636 kB' 'SwapCached: 0 kB' 'Active: 9285156 kB' 'Inactive: 3506552 kB' 'Active(anon): 8890804 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534580 kB' 'Mapped: 203540 kB' 'Shmem: 8359436 kB' 'KReclaimable: 204444 kB' 'Slab: 580752 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376308 kB' 'KernelStack: 12896 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10012148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 
'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 
04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 
04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 04:20:57 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43763012 kB' 'MemAvailable: 47272516 kB' 'Buffers: 2704 kB' 'Cached: 12257636 kB' 'SwapCached: 0 kB' 'Active: 9284760 kB' 'Inactive: 3506552 kB' 'Active(anon): 8890408 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534168 kB' 'Mapped: 203464 kB' 'Shmem: 8359436 kB' 'KReclaimable: 204444 kB' 'Slab: 580744 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376300 kB' 'KernelStack: 12912 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10012168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.253 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.254 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo 
nr_hugepages=1024 00:04:37.255 nr_hugepages=1024 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:37.255 resv_hugepages=0 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:37.255 surplus_hugepages=0 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:37.255 anon_hugepages=0 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43763012 kB' 'MemAvailable: 47272516 kB' 'Buffers: 2704 kB' 'Cached: 12257676 kB' 'SwapCached: 0 kB' 'Active: 9285148 kB' 'Inactive: 3506552 kB' 'Active(anon): 8890796 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534460 kB' 'Mapped: 203464 kB' 'Shmem: 8359476 kB' 'KReclaimable: 204444 kB' 'Slab: 580744 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376300 kB' 'KernelStack: 12912 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10012192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.255 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.256 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.257 04:20:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:37.257 
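[editor's note] The trace up to this point is one pass of get_meminfo over /proc/meminfo: every key is compared against HugePages_Total and skipped with "continue" until the matching entry is reached, its value (1024) is echoed, and the caller's check (( 1024 == nr_hugepages + surp + resv )) passes. A minimal, illustrative bash sketch of that lookup pattern follows; the function name my_get_meminfo and the sed-based "Node N" prefix stripping are my own simplifications (the real setup/common.sh uses mapfile plus an extglob substitution, as the trace shows), so this is a sketch of the idea, not the script itself.

  # Illustrative only -- not the real setup/common.sh implementation.
  my_get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # Per-node lookups read that node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue   # skip every other key, as in the trace above
      echo "$val"                        # e.g. 1024 for HugePages_Total
      return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")   # per-node files prefix keys with "Node <n> "
    return 1
  }

Called as "my_get_meminfo HugePages_Total" on this box it would print 1024; "my_get_meminfo HugePages_Surp 0" would read node0's meminfo and print 0, matching the echo 0 seen further down in the trace.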
04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26473892 kB' 'MemUsed: 6355992 kB' 'SwapCached: 0 kB' 'Active: 3167940 kB' 'Inactive: 108416 kB' 'Active(anon): 3057052 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3039728 kB' 'Mapped: 33560 kB' 'AnonPages: 239740 kB' 'Shmem: 2820424 kB' 'KernelStack: 7976 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102808 kB' 'Slab: 322916 kB' 'SReclaimable: 102808 kB' 'SUnreclaim: 220108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.257 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.258 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.259 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.259 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.259 04:20:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.259 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.259 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.259 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:37.259 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:37.259 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:37.259 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:37.259 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:37.259 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:37.259 node0=1024 expecting 1024 00:04:37.259 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:37.259 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:37.259 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:37.259 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:37.259 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.259 04:20:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:38.637 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:38.637 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:38.637 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:38.637 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:38.637 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:38.637 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:38.637 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:38.637 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:38.637 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:38.637 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:38.637 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:38.637 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:38.637 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:38.637 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:38.637 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:38.637 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:38.637 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:38.637 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:38.637 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:38.637 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:38.637 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:38.637 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:38.637 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:38.637 04:20:58 
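[editor's note] At this point the per-node tally has confirmed "node0=1024 expecting 1024", and setup.sh is re-run with CLEAR_HUGE=no and NRHUGE=512; it reports that the NVMe and IOAT devices are already bound to vfio-pci and prints "INFO: Requested 512 hugepages but 1024 already allocated on node0", i.e. an existing, larger hugepage pool is kept rather than shrunk, which is exactly what the no_shrink_alloc test exercises. A hedged sketch of that decision is below; the variable names and the exact comparison are assumptions about scripts/setup.sh inferred from the INFO line, not a copy of its code.

  # Illustrative sketch of the "no shrink" behaviour implied by the INFO line above.
  req=${NRHUGE:-512}
  node=node0
  nr=/sys/devices/system/node/$node/hugepages/hugepages-2048kB/nr_hugepages
  cur=$(cat "$nr")
  if (( cur >= req )); then
    # Keep the existing, larger pool instead of shrinking it to the request.
    echo "INFO: Requested $req hugepages but $cur already allocated on $node"
  else
    echo "$req" > "$nr"
  fi

The sysfs path used here is the standard per-node 2 MiB hugepage counter; whether setup.sh takes exactly this branch structure is an assumption, but the observable outcome (1024 pages left in place despite a 512-page request) matches the log.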
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:38.637 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:38.637 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:38.637 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:38.637 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:38.637 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:38.637 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:38.637 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.637 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.637 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43764312 kB' 'MemAvailable: 47273816 kB' 'Buffers: 2704 kB' 'Cached: 12257740 kB' 'SwapCached: 0 kB' 'Active: 9284968 kB' 'Inactive: 3506552 kB' 'Active(anon): 8890616 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534412 kB' 'Mapped: 203588 kB' 'Shmem: 8359540 kB' 'KReclaimable: 204444 kB' 'Slab: 580656 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376212 kB' 'KernelStack: 12928 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10012368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.638 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.639 04:20:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.639 04:20:58 
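[editor's note] The verify pass above first checks /sys/kernel/mm/transparent_hugepage/enabled (here "always [madvise] never", so THP is not disabled), scans AnonHugePages, finds 0, and records anon=0 before moving on to HugePages_Surp. The following sketch shows how those pieces might be combined into the expectation that passed earlier in the trace, (( 1024 == nr_hugepages + surp + resv )); the exact formula hugepages.sh applies after the NRHUGE=512 re-run is not visible in this part of the log, so treat this as an illustration, reusing the hypothetical my_get_meminfo helper sketched above.

  # Illustrative accounting only; the real check lives in setup/hugepages.sh.
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" on this host
  anon=0
  if [[ $thp != *"[never]"* ]]; then
    anon=$(my_get_meminfo AnonHugePages)   # 0 kB here, so it contributes nothing
  fi
  surp=$(my_get_meminfo HugePages_Surp)
  resv=$(my_get_meminfo HugePages_Rsvd)
  total=$(my_get_meminfo HugePages_Total)
  # Mirrors the check seen earlier in the trace: (( 1024 == nr_hugepages + surp + resv )).
  expected=1024
  (( total == expected + surp + resv )) || echo "unexpected hugepage total: $total"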
00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:38.639 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43764788 kB' 'MemAvailable: 47274292 kB' 'Buffers: 2704 kB' 'Cached: 12257744 kB' 'SwapCached: 0 kB' 'Active: 9285576 kB' 'Inactive: 3506552 kB' 'Active(anon): 8891224 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535064 kB' 'Mapped: 203548 kB' 'Shmem: 8359544 kB' 'KReclaimable: 204444 kB' 'Slab: 580672 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376228 kB' 'KernelStack: 12944 kB' 'PageTables: 8088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10012388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB'
(setup/common.sh@31-@32 read and skip, with continue, every key from MemTotal through HugePages_Rsvd that is not HugePages_Surp)
00:04:38.641 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:38.641 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:38.641 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:38.641 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
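The snapshot just printed is the source of every hugepage counter this block reads; a quick consistency check of its figures (illustrative arithmetic, not part of the test script):

  # HugePages_Total x Hugepagesize should equal the Hugetlb line:
  echo $(( 1024 * 2048 ))   # 2097152, matching 'Hugetlb: 2097152 kB'
  # HugePages_Free equals HugePages_Total (1024), so no huge pages are in use yet,
  # and HugePages_Rsvd / HugePages_Surp are both 0, which is exactly what the
  # surp and resv reads come back with.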
00:04:38.641 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:38.641 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:38.641 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:38.641 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:38.641 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:38.641 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.641 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.641 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:38.641 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.641 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.641 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:38.641 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:38.641 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43764788 kB' 'MemAvailable: 47274292 kB' 'Buffers: 2704 kB' 'Cached: 12257760 kB' 'SwapCached: 0 kB' 'Active: 9285184 kB' 'Inactive: 3506552 kB' 'Active(anon): 8890832 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534544 kB' 'Mapped: 203468 kB' 'Shmem: 8359560 kB' 'KReclaimable: 204444 kB' 'Slab: 580696 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376252 kB' 'KernelStack: 12960 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10012408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB'
(setup/common.sh@31-@32 read and skip, with continue, every key from MemTotal through HugePages_Free that is not HugePages_Rsvd)
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:38.905 nr_hugepages=1024
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:38.905 resv_hugepages=0
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:38.905 surplus_hugepages=0
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:38.905 anon_hugepages=0
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
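The hugepages.sh lines just above are the core of the no_shrink_alloc check: the counters read from the snapshots are combined and compared against the configured pool size before the trace re-reads HugePages_Total. A hedged reconstruction of that bookkeeping, reusing the get_meminfo_sketch helper from the earlier sketch (the wrapper name verify_no_shrink and the target variable are invented here; the two comparisons mirror hugepages.sh@107 and @109):

  # Illustrative reconstruction of the accounting traced above; not the repository code.
  verify_no_shrink() {
      local target=1024                                     # hugepage count the test configured
      local nr_hugepages surp resv
      nr_hugepages=$(get_meminfo_sketch HugePages_Total)    # 1024 in the snapshots above
      surp=$(get_meminfo_sketch HugePages_Surp)             # 0
      resv=$(get_meminfo_sketch HugePages_Rsvd)             # 0
      echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
      # The pool must still account for every page: nothing shrunk away, nothing surplus.
      (( target == nr_hugepages + surp + resv )) || return 1
      (( target == nr_hugepages ))
  }

With surp and resv both 0 in this run, both comparisons reduce to 1024 == 1024, so the trace simply continues to the HugePages_Total read that follows.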
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:38.905 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43764984 kB' 'MemAvailable: 47274488 kB' 'Buffers: 2704 kB' 'Cached: 12257784 kB' 'SwapCached: 0 kB' 'Active: 9285224 kB' 'Inactive: 3506552 kB' 'Active(anon): 8890872 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534552 kB' 'Mapped: 203468 kB' 'Shmem: 8359584 kB' 'KReclaimable: 204444 kB' 'Slab: 580696 kB' 'SReclaimable: 204444 kB' 'SUnreclaim: 376252 kB' 'KernelStack: 12960 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10012432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB'
(setup/common.sh@31-@32 read and skip, with continue, the keys from MemTotal through Percpu, none of which match HugePages_Total)
00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26482244 kB' 'MemUsed: 6347640 kB' 'SwapCached: 0 kB' 'Active: 3168148 kB' 'Inactive: 108416 kB' 'Active(anon): 3057260 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3039732 kB' 
'Mapped: 33128 kB' 'AnonPages: 239640 kB' 'Shmem: 2820428 kB' 'KernelStack: 7992 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102808 kB' 'Slab: 322908 kB' 'SReclaimable: 102808 kB' 'SUnreclaim: 220100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.907 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.908 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.909 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.909 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.909 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.909 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.909 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:38.909 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:38.909 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:38.909 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:38.909 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:38.909 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:38.909 node0=1024 expecting 1024 00:04:38.909 04:20:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:38.909 00:04:38.909 real 0m2.885s 00:04:38.909 user 0m1.175s 00:04:38.909 sys 0m1.630s 00:04:38.909 04:20:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:38.909 04:20:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:38.909 ************************************ 00:04:38.909 END TEST no_shrink_alloc 00:04:38.909 ************************************ 00:04:38.909 04:20:58 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:38.909 04:20:58 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:38.909 04:20:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:38.909 04:20:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:38.909 04:20:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:38.909 04:20:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:38.909 04:20:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:38.909 04:20:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:38.909 04:20:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:38.909 04:20:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:38.909 04:20:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:38.909 04:20:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:38.909 04:20:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:38.909 04:20:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:38.909 00:04:38.909 real 0m11.328s 00:04:38.909 user 0m4.321s 00:04:38.909 sys 0m5.864s 00:04:38.909 04:20:58 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:38.909 04:20:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:38.909 ************************************ 00:04:38.909 END TEST hugepages 00:04:38.909 ************************************ 00:04:38.909 04:20:58 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:38.909 04:20:58 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:38.909 04:20:58 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:38.909 04:20:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:38.909 ************************************ 00:04:38.909 START TEST driver 00:04:38.909 ************************************ 00:04:38.909 04:20:58 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:38.909 * Looking for test storage... 
00:04:38.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:38.909 04:20:59 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:38.909 04:20:59 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:38.909 04:20:59 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:41.443 04:21:01 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:41.443 04:21:01 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:41.443 04:21:01 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:41.443 04:21:01 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:41.443 ************************************ 00:04:41.443 START TEST guess_driver 00:04:41.443 ************************************ 00:04:41.443 04:21:01 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:41.443 04:21:01 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:41.443 04:21:01 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:41.443 04:21:01 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:41.443 04:21:01 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:41.443 04:21:01 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:41.443 04:21:01 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:41.443 04:21:01 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:41.443 04:21:01 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:41.443 04:21:01 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:41.443 04:21:01 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:41.443 04:21:01 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:41.443 04:21:01 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:41.443 04:21:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:41.443 04:21:01 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:41.443 04:21:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:41.443 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:41.443 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:41.443 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:41.443 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:41.443 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:41.443 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:41.443 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:41.443 04:21:01 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:41.443 04:21:01 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:41.443 04:21:01 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:41.443 04:21:01 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:41.444 04:21:01 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:41.444 Looking for driver=vfio-pci 00:04:41.444 04:21:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.444 04:21:01 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:41.444 04:21:01 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.444 04:21:01 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.407 04:21:02 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:42.407 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.665 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:42.665 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:42.665 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.665 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:42.665 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:42.665 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.665 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:42.665 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:42.665 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.666 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:42.666 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:42.666 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.666 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:42.666 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:42.666 04:21:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.598 04:21:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.598 04:21:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.598 04:21:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.598 04:21:03 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:43.598 04:21:03 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:43.598 04:21:03 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.598 04:21:03 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:46.126 00:04:46.126 real 0m4.748s 00:04:46.126 user 0m1.025s 00:04:46.126 sys 0m1.821s 00:04:46.126 04:21:06 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:46.126 04:21:06 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:46.126 ************************************ 00:04:46.126 END TEST guess_driver 00:04:46.126 ************************************ 00:04:46.126 00:04:46.126 real 0m7.211s 00:04:46.126 user 0m1.591s 00:04:46.126 sys 0m2.779s 00:04:46.126 04:21:06 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:46.126 
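The guess_driver trace above settles on vfio-pci because IOMMU groups are present ((( 141 > 0 ))) and modprobe --show-depends vfio_pci resolves to real .ko modules. A minimal sketch of that selection logic, assuming the same sysfs paths; the function name pick_pci_driver and the uio_pci_generic fallback are assumptions for illustration, while the "No valid driver found" string does appear in driver.sh's own check above.

pick_pci_driver() {
    # Prefer vfio-pci when an IOMMU is usable (or unsafe no-IOMMU mode is
    # enabled) and the module actually resolves; otherwise try a fallback.
    local unsafe=N
    local -a groups
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    shopt -s nullglob                      # empty glob -> empty array, not a literal
    groups=(/sys/kernel/iommu_groups/*)
    if { (( ${#groups[@]} > 0 )) || [[ $unsafe == [Yy]* ]]; } &&
       modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}

The repeated "[[ -> == \-\> ]]" and "[[ vfio-pci == vfio-pci ]]" checks in the trace are driver.sh reading "setup output config" and confirming that every listed device reports the driver chosen this way.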
04:21:06 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:46.126 ************************************ 00:04:46.126 END TEST driver 00:04:46.126 ************************************ 00:04:46.126 04:21:06 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:46.126 04:21:06 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:46.126 04:21:06 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:46.126 04:21:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:46.126 ************************************ 00:04:46.126 START TEST devices 00:04:46.126 ************************************ 00:04:46.127 04:21:06 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:46.127 * Looking for test storage... 00:04:46.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:46.127 04:21:06 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:46.127 04:21:06 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:46.127 04:21:06 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:46.127 04:21:06 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:47.501 04:21:07 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:47.501 04:21:07 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:47.501 04:21:07 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:47.501 04:21:07 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:47.501 04:21:07 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:47.501 04:21:07 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:47.501 04:21:07 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:47.501 04:21:07 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:47.501 04:21:07 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:47.501 04:21:07 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:47.501 04:21:07 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:47.501 04:21:07 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:47.501 04:21:07 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:47.501 04:21:07 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:47.501 04:21:07 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:47.501 04:21:07 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:47.501 04:21:07 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:47.501 04:21:07 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:47.501 04:21:07 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:47.501 04:21:07 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:47.501 04:21:07 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:47.501 04:21:07 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:47.501 No valid GPT data, 
bailing 00:04:47.501 04:21:07 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:47.760 04:21:07 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:47.760 04:21:07 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:47.760 04:21:07 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:47.760 04:21:07 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:47.760 04:21:07 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:47.760 04:21:07 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:47.760 04:21:07 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:47.760 04:21:07 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:47.760 04:21:07 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:47.760 04:21:07 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:47.760 04:21:07 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:47.760 04:21:07 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:47.760 04:21:07 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:47.760 04:21:07 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.760 04:21:07 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:47.760 ************************************ 00:04:47.760 START TEST nvme_mount 00:04:47.760 ************************************ 00:04:47.760 04:21:07 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:47.760 04:21:07 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:47.760 04:21:07 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:47.760 04:21:07 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.760 04:21:07 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:47.760 04:21:07 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:47.760 04:21:07 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:47.760 04:21:07 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:47.760 04:21:07 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:47.760 04:21:07 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:47.760 04:21:07 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:47.760 04:21:07 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:47.760 04:21:07 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:47.760 04:21:07 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.760 04:21:07 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:47.760 04:21:07 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:47.760 04:21:07 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.760 04:21:07 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:47.760 04:21:07 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:47.760 04:21:07 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:48.694 Creating new GPT entries in memory. 00:04:48.694 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:48.694 other utilities. 00:04:48.694 04:21:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:48.694 04:21:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.694 04:21:08 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:48.694 04:21:08 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:48.694 04:21:08 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:49.628 Creating new GPT entries in memory. 00:04:49.628 The operation has completed successfully. 00:04:49.628 04:21:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:49.628 04:21:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:49.628 04:21:09 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2648283 00:04:49.628 04:21:09 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.628 04:21:09 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:49.628 04:21:09 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.628 04:21:09 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:49.628 04:21:09 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:49.628 04:21:09 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.885 04:21:09 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.885 04:21:09 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:49.885 04:21:09 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:49.885 04:21:09 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.885 04:21:09 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.885 04:21:09 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:49.885 04:21:09 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:49.885 04:21:09 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:49.885 04:21:09 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
00:04:49.885 04:21:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.885 04:21:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:49.885 04:21:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:49.885 04:21:09 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.885 04:21:09 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.817 04:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.075 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:51.075 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:51.075 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.075 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:51.075 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:51.075 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:51.075 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.075 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.075 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:51.075 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:51.075 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:51.075 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:51.075 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:51.333 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:51.333 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:51.333 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:51.333 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:51.333 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:51.333 04:21:11 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:51.333 04:21:11 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.333 04:21:11 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:51.333 04:21:11 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:51.333 04:21:11 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.333 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:51.333 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:51.333 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:51.333 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.333 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:51.333 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:51.333 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:51.333 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:51.333 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:51.333 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.333 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:51.333 04:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:51.333 04:21:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.333 04:21:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.266 04:21:12 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.266 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.525 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.525 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:52.525 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.525 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:52.525 04:21:12 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:52.525 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.525 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:52.525 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:52.525 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:52.525 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:52.525 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:52.525 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:52.525 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:52.525 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:52.525 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.525 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:52.525 04:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:52.525 04:21:12 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.525 04:21:12 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:53.899 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:53.899 00:04:53.899 real 0m6.184s 00:04:53.899 user 0m1.475s 00:04:53.899 sys 0m2.285s 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:53.899 04:21:13 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:53.899 ************************************ 00:04:53.899 END TEST nvme_mount 00:04:53.899 ************************************ 
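The nvme_mount test that just finished boils down to a short sequence of standard tools. A condensed sketch of that flow follows (this is not the test script itself; the mount point is shortened to a placeholder, and the test-file creation is a stand-in for what devices.sh@56 does):

  disk=/dev/nvme0n1
  mnt=/tmp/nvme_mount                     # stands in for spdk/test/setup/nvme_mount
  sgdisk "$disk" --zap-all                # destroy existing GPT/MBR structures
  sgdisk "$disk" --new=1:2048:2099199     # create a 1 GiB first partition
  mkdir -p "$mnt"
  mkfs.ext4 -qF "${disk}p1"               # format the new partition
  mount "${disk}p1" "$mnt"                # mount it under the test directory
  touch "$mnt/test_nvme"                  # stand-in for the file the verify step checks
  rm "$mnt/test_nvme"                     # cleanup mirrors the log: remove, unmount, wipe
  umount "$mnt"
  wipefs --all "${disk}p1"
  wipefs --all "$disk"
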
00:04:53.899 04:21:13 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:53.899 04:21:13 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.899 04:21:13 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.899 04:21:13 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:53.899 ************************************ 00:04:53.899 START TEST dm_mount 00:04:53.899 ************************************ 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:53.899 04:21:13 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:54.831 Creating new GPT entries in memory. 00:04:54.831 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:54.831 other utilities. 00:04:54.831 04:21:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:54.831 04:21:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:54.831 04:21:14 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:54.831 04:21:14 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:54.831 04:21:14 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:56.204 Creating new GPT entries in memory. 00:04:56.204 The operation has completed successfully. 
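The --new=1:2048:2099199 argument above follows directly from the sector arithmetic in setup/common.sh: the requested 1073741824 bytes (1 GiB) divided by 512 gives 2097152 sectors, the first partition starts at sector 2048, and the end sector is start + size - 1. A minimal sketch of that calculation (variable names mirror the script; the echo is only illustrative):

  size=1073741824                          # 1 GiB requested per partition
  (( size /= 512 ))                        # 2097152 512-byte sectors
  (( part_start = 2048 ))                  # first partition begins at sector 2048
  (( part_end = part_start + size - 1 ))   # 2048 + 2097152 - 1 = 2099199
  echo "--new=1:${part_start}:${part_end}"
  # the next partition then starts at part_end + 1 = 2099200, which is why the
  # second sgdisk call below uses --new=2:2099200:4196351
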
00:04:56.204 04:21:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:56.204 04:21:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:56.204 04:21:15 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:56.204 04:21:15 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:56.204 04:21:15 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:57.139 The operation has completed successfully. 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2650795 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.139 04:21:17 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:58.116 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.116 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:58.116 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:58.116 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.116 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.116 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.116 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.116 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.116 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.116 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.116 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.116 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:58.117 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.376 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:58.376 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:58.376 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.376 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:58.376 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.376 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.376 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:58.376 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:58.376 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:58.376 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:58.376 04:21:18 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:58.376 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:58.376 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:58.376 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:58.376 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.376 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:58.376 04:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:58.376 04:21:18 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.376 04:21:18 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:59.310 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.310 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:59.310 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:59.310 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.310 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.310 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.310 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.310 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.310 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.310 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.310 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.310 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.310 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.310 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.310 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.310 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.310 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.310 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.310 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.311 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.311 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.311 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.311 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.311 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.311 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.311 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.311 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.311 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.311 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.570 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.570 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:59.570 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:59.570 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:59.570 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:59.570 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:59.570 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:59.570 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:59.570 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:59.570 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:59.570 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:59.570 04:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:59.570 00:04:59.570 real 0m5.668s 00:04:59.570 user 0m0.920s 00:04:59.570 sys 0m1.576s 00:04:59.570 04:21:19 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:59.570 04:21:19 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:59.570 ************************************ 00:04:59.570 END TEST dm_mount 00:04:59.570 ************************************ 00:04:59.570 04:21:19 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:59.570 04:21:19 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:59.570 04:21:19 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.570 04:21:19 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:59.570 04:21:19 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:59.570 04:21:19 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:59.570 04:21:19 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:59.830 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:59.830 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:59.830 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:59.830 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:59.830 04:21:19 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:59.830 04:21:19 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:59.830 04:21:19 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:59.830 04:21:19 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:59.830 04:21:19 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:59.830 04:21:19 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:59.830 04:21:19 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:59.830 00:04:59.830 real 0m13.697s 00:04:59.830 user 0m3.005s 00:04:59.830 sys 0m4.856s 00:04:59.830 04:21:19 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:59.830 04:21:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:59.830 ************************************ 00:04:59.830 END TEST devices 00:04:59.830 ************************************ 00:04:59.830 00:04:59.830 real 0m42.734s 00:04:59.830 user 0m12.065s 00:04:59.830 sys 0m18.816s 00:04:59.830 04:21:19 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:59.830 04:21:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:59.830 ************************************ 00:04:59.830 END TEST setup.sh 00:04:59.830 ************************************ 00:04:59.830 04:21:19 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:01.207 Hugepages 00:05:01.207 node hugesize free / total 00:05:01.207 node0 1048576kB 0 / 0 00:05:01.207 node0 2048kB 2048 / 2048 00:05:01.207 node1 1048576kB 0 / 0 00:05:01.207 node1 2048kB 0 / 0 00:05:01.207 00:05:01.207 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:01.207 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:01.207 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:01.207 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:01.207 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:01.207 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:01.207 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:01.207 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:01.207 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:01.207 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:01.207 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:01.207 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:01.207 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:01.207 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:01.207 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:01.207 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:01.207 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:01.207 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:01.207 04:21:21 -- spdk/autotest.sh@130 -- # uname -s 00:05:01.207 04:21:21 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:01.207 04:21:21 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:01.207 04:21:21 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:02.143 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:02.143 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:02.143 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:02.143 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:02.143 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:02.143 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:02.143 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:02.143 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:02.143 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:02.143 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:02.402 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:02.402 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:02.402 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:02.402 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:02.402 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:02.402 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:03.337 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:03.337 04:21:23 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:04.273 04:21:24 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:04.273 04:21:24 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:04.273 04:21:24 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:04.273 04:21:24 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:04.273 04:21:24 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:04.273 04:21:24 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:04.273 04:21:24 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:04.273 04:21:24 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:04.273 04:21:24 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:04.273 04:21:24 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:04.273 04:21:24 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:04.273 04:21:24 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:05.650 Waiting for block devices as requested 00:05:05.650 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:05.650 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:05.650 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:05.909 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:05.909 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:05.909 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:05.909 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:05.909 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:06.169 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:06.169 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:06.169 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:06.435 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:06.435 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:06.435 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:06.435 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:06.694 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:06.694 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:06.694 04:21:26 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
00:05:06.694 04:21:26 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:06.694 04:21:26 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:06.694 04:21:26 -- common/autotest_common.sh@1498 -- # grep 0000:88:00.0/nvme/nvme 00:05:06.694 04:21:26 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:06.694 04:21:26 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:06.694 04:21:26 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:06.694 04:21:26 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:06.694 04:21:26 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:06.694 04:21:26 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:06.694 04:21:26 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:06.694 04:21:26 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:06.694 04:21:26 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:06.694 04:21:26 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:05:06.694 04:21:26 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:06.694 04:21:26 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:06.694 04:21:26 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:06.694 04:21:26 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:06.694 04:21:26 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:06.694 04:21:26 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:06.694 04:21:26 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:06.694 04:21:26 -- common/autotest_common.sh@1553 -- # continue 00:05:06.694 04:21:26 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:06.694 04:21:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:06.694 04:21:26 -- common/autotest_common.sh@10 -- # set +x 00:05:06.694 04:21:26 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:06.694 04:21:26 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:06.694 04:21:26 -- common/autotest_common.sh@10 -- # set +x 00:05:06.694 04:21:26 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:08.069 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:08.069 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:08.069 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:08.069 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:08.069 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:08.069 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:08.069 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:08.069 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:08.069 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:08.069 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:08.069 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:08.069 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:08.069 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:08.069 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:08.069 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:08.069 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:09.008 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:09.267 04:21:29 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:09.267 04:21:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:09.267 04:21:29 -- 
common/autotest_common.sh@10 -- # set +x 00:05:09.267 04:21:29 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:09.267 04:21:29 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:09.267 04:21:29 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:09.267 04:21:29 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:09.267 04:21:29 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:09.267 04:21:29 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:09.267 04:21:29 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:09.267 04:21:29 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:09.267 04:21:29 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:09.267 04:21:29 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:09.267 04:21:29 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:09.267 04:21:29 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:09.267 04:21:29 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:09.267 04:21:29 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:09.267 04:21:29 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:09.267 04:21:29 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:09.267 04:21:29 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:09.267 04:21:29 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:09.267 04:21:29 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:88:00.0 00:05:09.267 04:21:29 -- common/autotest_common.sh@1588 -- # [[ -z 0000:88:00.0 ]] 00:05:09.267 04:21:29 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=2655968 00:05:09.267 04:21:29 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.267 04:21:29 -- common/autotest_common.sh@1594 -- # waitforlisten 2655968 00:05:09.267 04:21:29 -- common/autotest_common.sh@827 -- # '[' -z 2655968 ']' 00:05:09.267 04:21:29 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.267 04:21:29 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:09.267 04:21:29 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.267 04:21:29 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:09.267 04:21:29 -- common/autotest_common.sh@10 -- # set +x 00:05:09.267 [2024-07-14 04:21:29.373197] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:05:09.267 [2024-07-14 04:21:29.373290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2655968 ] 00:05:09.267 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.267 [2024-07-14 04:21:29.436520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.525 [2024-07-14 04:21:29.526720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.783 04:21:29 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:09.783 04:21:29 -- common/autotest_common.sh@860 -- # return 0 00:05:09.783 04:21:29 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:09.783 04:21:29 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:09.783 04:21:29 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:13.067 nvme0n1 00:05:13.067 04:21:32 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:13.067 [2024-07-14 04:21:33.081836] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:13.067 [2024-07-14 04:21:33.081893] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:13.067 request: 00:05:13.067 { 00:05:13.067 "nvme_ctrlr_name": "nvme0", 00:05:13.067 "password": "test", 00:05:13.067 "method": "bdev_nvme_opal_revert", 00:05:13.067 "req_id": 1 00:05:13.067 } 00:05:13.067 Got JSON-RPC error response 00:05:13.067 response: 00:05:13.067 { 00:05:13.067 "code": -32603, 00:05:13.067 "message": "Internal error" 00:05:13.067 } 00:05:13.067 04:21:33 -- common/autotest_common.sh@1600 -- # true 00:05:13.067 04:21:33 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:13.067 04:21:33 -- common/autotest_common.sh@1604 -- # killprocess 2655968 00:05:13.067 04:21:33 -- common/autotest_common.sh@946 -- # '[' -z 2655968 ']' 00:05:13.067 04:21:33 -- common/autotest_common.sh@950 -- # kill -0 2655968 00:05:13.067 04:21:33 -- common/autotest_common.sh@951 -- # uname 00:05:13.067 04:21:33 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:13.067 04:21:33 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2655968 00:05:13.067 04:21:33 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:13.067 04:21:33 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:13.067 04:21:33 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2655968' 00:05:13.067 killing process with pid 2655968 00:05:13.067 04:21:33 -- common/autotest_common.sh@965 -- # kill 2655968 00:05:13.067 04:21:33 -- common/autotest_common.sh@970 -- # wait 2655968 00:05:13.067 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:13.067 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:13.067 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:13.067 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:13.067 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:13.067 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:13.067 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:13.067 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152
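Note on the OPAL step above: the revert failure is tolerated by the harness. The RPC returns -32603, the trailing 'true' step keeps the script going, the loop advances to the next BDF, and the target is killed; the burst of EAL "DMA remapping cleared" messages accompanies that shutdown. As a rough sketch only, assuming a freshly started spdk_tgt listening on the default /var/tmp/spdk.sock, the same two RPCs seen in this run can be issued by hand (the controller name nvme0, the PCIe address 0000:88:00.0 and the password "test" are the values used here, not general defaults):

    # start the target built by this job (backgrounded; startup options omitted for brevity)
    ./build/bin/spdk_tgt &
    # attach the NVMe controller at BDF 0000:88:00.0 as controller "nvme0" (this run got bdev nvme0n1)
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
    # attempt the OPAL revert with password "test"; on a drive that rejects the admin SP
    # session this returns the same "Internal error" (-32603) response recorded above
    ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test

A failed revert here is not treated as a test failure; the run continues with the env test group below.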
00:05:15.226 04:21:34 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:15.226 04:21:34 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:15.226 04:21:34 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:15.226 04:21:34 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:15.226 04:21:34 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:15.226 04:21:34 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:15.226 04:21:34 -- common/autotest_common.sh@10 -- # set +x 00:05:15.226 04:21:34 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:15.226 04:21:34 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:15.226 04:21:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:15.226 04:21:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.226 04:21:34 -- common/autotest_common.sh@10 -- # set +x 00:05:15.226 ************************************ 00:05:15.226 START TEST env 00:05:15.226 ************************************ 00:05:15.226 04:21:34 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:15.226 * Looking for test storage... 
00:05:15.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:15.226 04:21:35 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:15.226 04:21:35 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:15.226 04:21:35 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.226 04:21:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.226 ************************************ 00:05:15.226 START TEST env_memory 00:05:15.226 ************************************ 00:05:15.226 04:21:35 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:15.226 00:05:15.226 00:05:15.226 CUnit - A unit testing framework for C - Version 2.1-3 00:05:15.226 http://cunit.sourceforge.net/ 00:05:15.226 00:05:15.226 00:05:15.226 Suite: memory 00:05:15.226 Test: alloc and free memory map ...[2024-07-14 04:21:35.083426] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:15.226 passed 00:05:15.226 Test: mem map translation ...[2024-07-14 04:21:35.104593] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:15.226 [2024-07-14 04:21:35.104615] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:15.226 [2024-07-14 04:21:35.104674] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:15.226 [2024-07-14 04:21:35.104686] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:15.226 passed 00:05:15.226 Test: mem map registration ...[2024-07-14 04:21:35.147204] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:15.226 [2024-07-14 04:21:35.147239] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:15.226 passed 00:05:15.226 Test: mem map adjacent registrations ...passed 00:05:15.226 00:05:15.226 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.226 suites 1 1 n/a 0 0 00:05:15.226 tests 4 4 4 0 0 00:05:15.226 asserts 152 152 152 0 n/a 00:05:15.226 00:05:15.226 Elapsed time = 0.144 seconds 00:05:15.226 00:05:15.226 real 0m0.152s 00:05:15.226 user 0m0.142s 00:05:15.226 sys 0m0.009s 00:05:15.226 04:21:35 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.226 04:21:35 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:15.226 ************************************ 00:05:15.226 END TEST env_memory 00:05:15.226 ************************************ 00:05:15.226 04:21:35 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:15.226 04:21:35 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:15.226 04:21:35 env -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:05:15.226 04:21:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.226 ************************************ 00:05:15.226 START TEST env_vtophys 00:05:15.226 ************************************ 00:05:15.226 04:21:35 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:15.226 EAL: lib.eal log level changed from notice to debug 00:05:15.226 EAL: Detected lcore 0 as core 0 on socket 0 00:05:15.226 EAL: Detected lcore 1 as core 1 on socket 0 00:05:15.226 EAL: Detected lcore 2 as core 2 on socket 0 00:05:15.226 EAL: Detected lcore 3 as core 3 on socket 0 00:05:15.226 EAL: Detected lcore 4 as core 4 on socket 0 00:05:15.226 EAL: Detected lcore 5 as core 5 on socket 0 00:05:15.226 EAL: Detected lcore 6 as core 8 on socket 0 00:05:15.226 EAL: Detected lcore 7 as core 9 on socket 0 00:05:15.226 EAL: Detected lcore 8 as core 10 on socket 0 00:05:15.226 EAL: Detected lcore 9 as core 11 on socket 0 00:05:15.226 EAL: Detected lcore 10 as core 12 on socket 0 00:05:15.226 EAL: Detected lcore 11 as core 13 on socket 0 00:05:15.226 EAL: Detected lcore 12 as core 0 on socket 1 00:05:15.226 EAL: Detected lcore 13 as core 1 on socket 1 00:05:15.226 EAL: Detected lcore 14 as core 2 on socket 1 00:05:15.226 EAL: Detected lcore 15 as core 3 on socket 1 00:05:15.226 EAL: Detected lcore 16 as core 4 on socket 1 00:05:15.226 EAL: Detected lcore 17 as core 5 on socket 1 00:05:15.226 EAL: Detected lcore 18 as core 8 on socket 1 00:05:15.226 EAL: Detected lcore 19 as core 9 on socket 1 00:05:15.226 EAL: Detected lcore 20 as core 10 on socket 1 00:05:15.226 EAL: Detected lcore 21 as core 11 on socket 1 00:05:15.226 EAL: Detected lcore 22 as core 12 on socket 1 00:05:15.226 EAL: Detected lcore 23 as core 13 on socket 1 00:05:15.226 EAL: Detected lcore 24 as core 0 on socket 0 00:05:15.226 EAL: Detected lcore 25 as core 1 on socket 0 00:05:15.226 EAL: Detected lcore 26 as core 2 on socket 0 00:05:15.226 EAL: Detected lcore 27 as core 3 on socket 0 00:05:15.226 EAL: Detected lcore 28 as core 4 on socket 0 00:05:15.226 EAL: Detected lcore 29 as core 5 on socket 0 00:05:15.226 EAL: Detected lcore 30 as core 8 on socket 0 00:05:15.226 EAL: Detected lcore 31 as core 9 on socket 0 00:05:15.226 EAL: Detected lcore 32 as core 10 on socket 0 00:05:15.226 EAL: Detected lcore 33 as core 11 on socket 0 00:05:15.226 EAL: Detected lcore 34 as core 12 on socket 0 00:05:15.226 EAL: Detected lcore 35 as core 13 on socket 0 00:05:15.226 EAL: Detected lcore 36 as core 0 on socket 1 00:05:15.226 EAL: Detected lcore 37 as core 1 on socket 1 00:05:15.226 EAL: Detected lcore 38 as core 2 on socket 1 00:05:15.226 EAL: Detected lcore 39 as core 3 on socket 1 00:05:15.226 EAL: Detected lcore 40 as core 4 on socket 1 00:05:15.226 EAL: Detected lcore 41 as core 5 on socket 1 00:05:15.226 EAL: Detected lcore 42 as core 8 on socket 1 00:05:15.226 EAL: Detected lcore 43 as core 9 on socket 1 00:05:15.226 EAL: Detected lcore 44 as core 10 on socket 1 00:05:15.226 EAL: Detected lcore 45 as core 11 on socket 1 00:05:15.226 EAL: Detected lcore 46 as core 12 on socket 1 00:05:15.226 EAL: Detected lcore 47 as core 13 on socket 1 00:05:15.226 EAL: Maximum logical cores by configuration: 128 00:05:15.226 EAL: Detected CPU lcores: 48 00:05:15.226 EAL: Detected NUMA nodes: 2 00:05:15.226 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:15.226 EAL: Detected shared linkage of DPDK 00:05:15.226 EAL: open shared lib 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:15.226 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:15.226 EAL: Registered [vdev] bus. 00:05:15.226 EAL: bus.vdev log level changed from disabled to notice 00:05:15.226 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:15.226 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:15.226 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:15.226 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:15.226 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:15.226 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:15.226 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:15.226 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:15.226 EAL: No shared files mode enabled, IPC will be disabled 00:05:15.226 EAL: No shared files mode enabled, IPC is disabled 00:05:15.226 EAL: Bus pci wants IOVA as 'DC' 00:05:15.226 EAL: Bus vdev wants IOVA as 'DC' 00:05:15.226 EAL: Buses did not request a specific IOVA mode. 00:05:15.226 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:15.226 EAL: Selected IOVA mode 'VA' 00:05:15.226 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.226 EAL: Probing VFIO support... 00:05:15.226 EAL: IOMMU type 1 (Type 1) is supported 00:05:15.226 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:15.226 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:15.226 EAL: VFIO support initialized 00:05:15.226 EAL: Ask a virtual area of 0x2e000 bytes 00:05:15.226 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:15.226 EAL: Setting up physically contiguous memory... 
00:05:15.226 EAL: Setting maximum number of open files to 524288 00:05:15.226 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:15.226 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:15.226 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:15.226 EAL: Ask a virtual area of 0x61000 bytes 00:05:15.226 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:15.227 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:15.227 EAL: Ask a virtual area of 0x400000000 bytes 00:05:15.227 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:15.227 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:15.227 EAL: Ask a virtual area of 0x61000 bytes 00:05:15.227 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:15.227 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:15.227 EAL: Ask a virtual area of 0x400000000 bytes 00:05:15.227 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:15.227 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:15.227 EAL: Ask a virtual area of 0x61000 bytes 00:05:15.227 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:15.227 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:15.227 EAL: Ask a virtual area of 0x400000000 bytes 00:05:15.227 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:15.227 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:15.227 EAL: Ask a virtual area of 0x61000 bytes 00:05:15.227 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:15.227 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:15.227 EAL: Ask a virtual area of 0x400000000 bytes 00:05:15.227 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:15.227 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:15.227 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:15.227 EAL: Ask a virtual area of 0x61000 bytes 00:05:15.227 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:15.227 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:15.227 EAL: Ask a virtual area of 0x400000000 bytes 00:05:15.227 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:15.227 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:15.227 EAL: Ask a virtual area of 0x61000 bytes 00:05:15.227 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:15.227 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:15.227 EAL: Ask a virtual area of 0x400000000 bytes 00:05:15.227 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:15.227 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:15.227 EAL: Ask a virtual area of 0x61000 bytes 00:05:15.227 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:15.227 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:15.227 EAL: Ask a virtual area of 0x400000000 bytes 00:05:15.227 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:15.227 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:15.227 EAL: Ask a virtual area of 0x61000 bytes 00:05:15.227 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:15.227 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:15.227 EAL: Ask a virtual area of 0x400000000 bytes 00:05:15.227 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:15.227 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:15.227 EAL: Hugepages will be freed exactly as allocated. 00:05:15.227 EAL: No shared files mode enabled, IPC is disabled 00:05:15.227 EAL: No shared files mode enabled, IPC is disabled 00:05:15.227 EAL: TSC frequency is ~2700000 KHz 00:05:15.227 EAL: Main lcore 0 is ready (tid=7f2118764a00;cpuset=[0]) 00:05:15.227 EAL: Trying to obtain current memory policy. 00:05:15.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.227 EAL: Restoring previous memory policy: 0 00:05:15.227 EAL: request: mp_malloc_sync 00:05:15.227 EAL: No shared files mode enabled, IPC is disabled 00:05:15.227 EAL: Heap on socket 0 was expanded by 2MB 00:05:15.227 EAL: No shared files mode enabled, IPC is disabled 00:05:15.227 EAL: No shared files mode enabled, IPC is disabled 00:05:15.227 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:15.227 EAL: Mem event callback 'spdk:(nil)' registered 00:05:15.227 00:05:15.227 00:05:15.227 CUnit - A unit testing framework for C - Version 2.1-3 00:05:15.227 http://cunit.sourceforge.net/ 00:05:15.227 00:05:15.227 00:05:15.227 Suite: components_suite 00:05:15.227 Test: vtophys_malloc_test ...passed 00:05:15.227 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:15.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.227 EAL: Restoring previous memory policy: 4 00:05:15.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.227 EAL: request: mp_malloc_sync 00:05:15.227 EAL: No shared files mode enabled, IPC is disabled 00:05:15.227 EAL: Heap on socket 0 was expanded by 4MB 00:05:15.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.227 EAL: request: mp_malloc_sync 00:05:15.227 EAL: No shared files mode enabled, IPC is disabled 00:05:15.227 EAL: Heap on socket 0 was shrunk by 4MB 00:05:15.227 EAL: Trying to obtain current memory policy. 00:05:15.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.227 EAL: Restoring previous memory policy: 4 00:05:15.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.227 EAL: request: mp_malloc_sync 00:05:15.227 EAL: No shared files mode enabled, IPC is disabled 00:05:15.227 EAL: Heap on socket 0 was expanded by 6MB 00:05:15.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.227 EAL: request: mp_malloc_sync 00:05:15.227 EAL: No shared files mode enabled, IPC is disabled 00:05:15.227 EAL: Heap on socket 0 was shrunk by 6MB 00:05:15.227 EAL: Trying to obtain current memory policy. 00:05:15.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.227 EAL: Restoring previous memory policy: 4 00:05:15.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.227 EAL: request: mp_malloc_sync 00:05:15.227 EAL: No shared files mode enabled, IPC is disabled 00:05:15.227 EAL: Heap on socket 0 was expanded by 10MB 00:05:15.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.227 EAL: request: mp_malloc_sync 00:05:15.227 EAL: No shared files mode enabled, IPC is disabled 00:05:15.227 EAL: Heap on socket 0 was shrunk by 10MB 00:05:15.227 EAL: Trying to obtain current memory policy. 
00:05:15.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.227 EAL: Restoring previous memory policy: 4 00:05:15.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.227 EAL: request: mp_malloc_sync 00:05:15.227 EAL: No shared files mode enabled, IPC is disabled 00:05:15.227 EAL: Heap on socket 0 was expanded by 18MB 00:05:15.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.227 EAL: request: mp_malloc_sync 00:05:15.227 EAL: No shared files mode enabled, IPC is disabled 00:05:15.227 EAL: Heap on socket 0 was shrunk by 18MB 00:05:15.227 EAL: Trying to obtain current memory policy. 00:05:15.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.227 EAL: Restoring previous memory policy: 4 00:05:15.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.227 EAL: request: mp_malloc_sync 00:05:15.227 EAL: No shared files mode enabled, IPC is disabled 00:05:15.227 EAL: Heap on socket 0 was expanded by 34MB 00:05:15.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.227 EAL: request: mp_malloc_sync 00:05:15.227 EAL: No shared files mode enabled, IPC is disabled 00:05:15.227 EAL: Heap on socket 0 was shrunk by 34MB 00:05:15.227 EAL: Trying to obtain current memory policy. 00:05:15.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.227 EAL: Restoring previous memory policy: 4 00:05:15.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.227 EAL: request: mp_malloc_sync 00:05:15.227 EAL: No shared files mode enabled, IPC is disabled 00:05:15.227 EAL: Heap on socket 0 was expanded by 66MB 00:05:15.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.227 EAL: request: mp_malloc_sync 00:05:15.227 EAL: No shared files mode enabled, IPC is disabled 00:05:15.227 EAL: Heap on socket 0 was shrunk by 66MB 00:05:15.227 EAL: Trying to obtain current memory policy. 00:05:15.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.486 EAL: Restoring previous memory policy: 4 00:05:15.486 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.486 EAL: request: mp_malloc_sync 00:05:15.486 EAL: No shared files mode enabled, IPC is disabled 00:05:15.486 EAL: Heap on socket 0 was expanded by 130MB 00:05:15.487 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.487 EAL: request: mp_malloc_sync 00:05:15.487 EAL: No shared files mode enabled, IPC is disabled 00:05:15.487 EAL: Heap on socket 0 was shrunk by 130MB 00:05:15.487 EAL: Trying to obtain current memory policy. 00:05:15.487 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.487 EAL: Restoring previous memory policy: 4 00:05:15.487 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.487 EAL: request: mp_malloc_sync 00:05:15.487 EAL: No shared files mode enabled, IPC is disabled 00:05:15.487 EAL: Heap on socket 0 was expanded by 258MB 00:05:15.487 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.487 EAL: request: mp_malloc_sync 00:05:15.487 EAL: No shared files mode enabled, IPC is disabled 00:05:15.487 EAL: Heap on socket 0 was shrunk by 258MB 00:05:15.487 EAL: Trying to obtain current memory policy. 
00:05:15.487 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.745 EAL: Restoring previous memory policy: 4 00:05:15.745 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.745 EAL: request: mp_malloc_sync 00:05:15.745 EAL: No shared files mode enabled, IPC is disabled 00:05:15.745 EAL: Heap on socket 0 was expanded by 514MB 00:05:15.745 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.004 EAL: request: mp_malloc_sync 00:05:16.004 EAL: No shared files mode enabled, IPC is disabled 00:05:16.004 EAL: Heap on socket 0 was shrunk by 514MB 00:05:16.004 EAL: Trying to obtain current memory policy. 00:05:16.004 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.264 EAL: Restoring previous memory policy: 4 00:05:16.264 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.264 EAL: request: mp_malloc_sync 00:05:16.264 EAL: No shared files mode enabled, IPC is disabled 00:05:16.264 EAL: Heap on socket 0 was expanded by 1026MB 00:05:16.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.779 EAL: request: mp_malloc_sync 00:05:16.779 EAL: No shared files mode enabled, IPC is disabled 00:05:16.779 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:16.779 passed 00:05:16.779 00:05:16.779 Run Summary: Type Total Ran Passed Failed Inactive 00:05:16.779 suites 1 1 n/a 0 0 00:05:16.779 tests 2 2 2 0 0 00:05:16.779 asserts 497 497 497 0 n/a 00:05:16.779 00:05:16.779 Elapsed time = 1.390 seconds 00:05:16.779 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.779 EAL: request: mp_malloc_sync 00:05:16.779 EAL: No shared files mode enabled, IPC is disabled 00:05:16.779 EAL: Heap on socket 0 was shrunk by 2MB 00:05:16.779 EAL: No shared files mode enabled, IPC is disabled 00:05:16.779 EAL: No shared files mode enabled, IPC is disabled 00:05:16.779 EAL: No shared files mode enabled, IPC is disabled 00:05:16.779 00:05:16.779 real 0m1.501s 00:05:16.779 user 0m0.863s 00:05:16.779 sys 0m0.608s 00:05:16.779 04:21:36 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.779 04:21:36 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:16.779 ************************************ 00:05:16.779 END TEST env_vtophys 00:05:16.779 ************************************ 00:05:16.779 04:21:36 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:16.779 04:21:36 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.779 04:21:36 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.779 04:21:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:16.779 ************************************ 00:05:16.779 START TEST env_pci 00:05:16.779 ************************************ 00:05:16.779 04:21:36 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:16.779 00:05:16.779 00:05:16.779 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.779 http://cunit.sourceforge.net/ 00:05:16.779 00:05:16.779 00:05:16.779 Suite: pci 00:05:16.780 Test: pci_hook ...[2024-07-14 04:21:36.800071] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2656855 has claimed it 00:05:16.780 EAL: Cannot find device (10000:00:01.0) 00:05:16.780 EAL: Failed to attach device on primary process 00:05:16.780 passed 00:05:16.780 00:05:16.780 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:16.780 suites 1 1 n/a 0 0 00:05:16.780 tests 1 1 1 0 0 00:05:16.780 asserts 25 25 25 0 n/a 00:05:16.780 00:05:16.780 Elapsed time = 0.021 seconds 00:05:16.780 00:05:16.780 real 0m0.032s 00:05:16.780 user 0m0.009s 00:05:16.780 sys 0m0.023s 00:05:16.780 04:21:36 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.780 04:21:36 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:16.780 ************************************ 00:05:16.780 END TEST env_pci 00:05:16.780 ************************************ 00:05:16.780 04:21:36 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:16.780 04:21:36 env -- env/env.sh@15 -- # uname 00:05:16.780 04:21:36 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:16.780 04:21:36 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:16.780 04:21:36 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:16.780 04:21:36 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:16.780 04:21:36 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.780 04:21:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:16.780 ************************************ 00:05:16.780 START TEST env_dpdk_post_init 00:05:16.780 ************************************ 00:05:16.780 04:21:36 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:16.780 EAL: Detected CPU lcores: 48 00:05:16.780 EAL: Detected NUMA nodes: 2 00:05:16.780 EAL: Detected shared linkage of DPDK 00:05:16.780 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:16.780 EAL: Selected IOVA mode 'VA' 00:05:16.780 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.780 EAL: VFIO support initialized 00:05:16.780 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:17.038 EAL: Using IOMMU type 1 (Type 1) 00:05:17.038 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:17.038 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:17.038 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:17.038 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:17.038 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:17.038 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:17.038 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:17.038 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:17.038 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:17.038 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:17.038 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:17.038 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:17.038 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:17.038 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:17.038 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:17.038 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:17.971 EAL: Probe PCI 
driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:21.252 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:21.252 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:21.252 Starting DPDK initialization... 00:05:21.252 Starting SPDK post initialization... 00:05:21.252 SPDK NVMe probe 00:05:21.252 Attaching to 0000:88:00.0 00:05:21.252 Attached to 0000:88:00.0 00:05:21.252 Cleaning up... 00:05:21.252 00:05:21.252 real 0m4.423s 00:05:21.252 user 0m3.285s 00:05:21.252 sys 0m0.197s 00:05:21.252 04:21:41 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.252 04:21:41 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:21.252 ************************************ 00:05:21.252 END TEST env_dpdk_post_init 00:05:21.252 ************************************ 00:05:21.252 04:21:41 env -- env/env.sh@26 -- # uname 00:05:21.252 04:21:41 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:21.252 04:21:41 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:21.252 04:21:41 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:21.252 04:21:41 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.252 04:21:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.252 ************************************ 00:05:21.252 START TEST env_mem_callbacks 00:05:21.252 ************************************ 00:05:21.252 04:21:41 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:21.252 EAL: Detected CPU lcores: 48 00:05:21.252 EAL: Detected NUMA nodes: 2 00:05:21.252 EAL: Detected shared linkage of DPDK 00:05:21.252 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:21.252 EAL: Selected IOVA mode 'VA' 00:05:21.252 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.252 EAL: VFIO support initialized 00:05:21.252 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:21.252 00:05:21.252 00:05:21.252 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.252 http://cunit.sourceforge.net/ 00:05:21.252 00:05:21.252 00:05:21.252 Suite: memory 00:05:21.252 Test: test ... 
00:05:21.252 register 0x200000200000 2097152 00:05:21.252 malloc 3145728 00:05:21.252 register 0x200000400000 4194304 00:05:21.252 buf 0x200000500000 len 3145728 PASSED 00:05:21.252 malloc 64 00:05:21.252 buf 0x2000004fff40 len 64 PASSED 00:05:21.252 malloc 4194304 00:05:21.252 register 0x200000800000 6291456 00:05:21.252 buf 0x200000a00000 len 4194304 PASSED 00:05:21.252 free 0x200000500000 3145728 00:05:21.252 free 0x2000004fff40 64 00:05:21.252 unregister 0x200000400000 4194304 PASSED 00:05:21.252 free 0x200000a00000 4194304 00:05:21.252 unregister 0x200000800000 6291456 PASSED 00:05:21.252 malloc 8388608 00:05:21.252 register 0x200000400000 10485760 00:05:21.252 buf 0x200000600000 len 8388608 PASSED 00:05:21.252 free 0x200000600000 8388608 00:05:21.252 unregister 0x200000400000 10485760 PASSED 00:05:21.252 passed 00:05:21.252 00:05:21.252 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.252 suites 1 1 n/a 0 0 00:05:21.252 tests 1 1 1 0 0 00:05:21.252 asserts 15 15 15 0 n/a 00:05:21.252 00:05:21.252 Elapsed time = 0.005 seconds 00:05:21.252 00:05:21.252 real 0m0.046s 00:05:21.252 user 0m0.008s 00:05:21.252 sys 0m0.038s 00:05:21.252 04:21:41 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.252 04:21:41 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:21.252 ************************************ 00:05:21.252 END TEST env_mem_callbacks 00:05:21.252 ************************************ 00:05:21.252 00:05:21.252 real 0m6.446s 00:05:21.252 user 0m4.410s 00:05:21.252 sys 0m1.082s 00:05:21.252 04:21:41 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.252 04:21:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.252 ************************************ 00:05:21.252 END TEST env 00:05:21.252 ************************************ 00:05:21.252 04:21:41 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:21.252 04:21:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:21.252 04:21:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.252 04:21:41 -- common/autotest_common.sh@10 -- # set +x 00:05:21.512 ************************************ 00:05:21.512 START TEST rpc 00:05:21.512 ************************************ 00:05:21.512 04:21:41 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:21.512 * Looking for test storage... 00:05:21.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:21.512 04:21:41 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2657618 00:05:21.512 04:21:41 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:21.512 04:21:41 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.512 04:21:41 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2657618 00:05:21.512 04:21:41 rpc -- common/autotest_common.sh@827 -- # '[' -z 2657618 ']' 00:05:21.512 04:21:41 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.512 04:21:41 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:21.512 04:21:41 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:21.512 04:21:41 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:21.512 04:21:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.512 [2024-07-14 04:21:41.554584] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:21.512 [2024-07-14 04:21:41.554683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2657618 ] 00:05:21.512 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.512 [2024-07-14 04:21:41.611679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.512 [2024-07-14 04:21:41.695231] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:21.512 [2024-07-14 04:21:41.695287] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2657618' to capture a snapshot of events at runtime. 00:05:21.512 [2024-07-14 04:21:41.695310] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:21.512 [2024-07-14 04:21:41.695321] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:21.512 [2024-07-14 04:21:41.695330] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2657618 for offline analysis/debug. 00:05:21.512 [2024-07-14 04:21:41.695364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.771 04:21:41 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:21.771 04:21:41 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:21.771 04:21:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:21.771 04:21:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:21.771 04:21:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:21.771 04:21:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:21.771 04:21:41 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:21.771 04:21:41 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.771 04:21:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.030 ************************************ 00:05:22.030 START TEST rpc_integrity 00:05:22.030 ************************************ 00:05:22.030 04:21:41 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:22.030 04:21:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:22.030 04:21:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.030 04:21:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.030 04:21:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.030 04:21:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:22.030 04:21:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:22.030 04:21:42 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:22.030 04:21:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:22.030 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.030 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.030 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.030 04:21:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:22.030 04:21:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:22.030 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.030 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.030 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.030 04:21:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:22.030 { 00:05:22.030 "name": "Malloc0", 00:05:22.030 "aliases": [ 00:05:22.030 "e0dd5266-e974-4d50-b337-38f252c4a19c" 00:05:22.030 ], 00:05:22.030 "product_name": "Malloc disk", 00:05:22.030 "block_size": 512, 00:05:22.030 "num_blocks": 16384, 00:05:22.030 "uuid": "e0dd5266-e974-4d50-b337-38f252c4a19c", 00:05:22.030 "assigned_rate_limits": { 00:05:22.030 "rw_ios_per_sec": 0, 00:05:22.030 "rw_mbytes_per_sec": 0, 00:05:22.030 "r_mbytes_per_sec": 0, 00:05:22.030 "w_mbytes_per_sec": 0 00:05:22.030 }, 00:05:22.030 "claimed": false, 00:05:22.030 "zoned": false, 00:05:22.030 "supported_io_types": { 00:05:22.030 "read": true, 00:05:22.030 "write": true, 00:05:22.030 "unmap": true, 00:05:22.030 "write_zeroes": true, 00:05:22.030 "flush": true, 00:05:22.030 "reset": true, 00:05:22.030 "compare": false, 00:05:22.030 "compare_and_write": false, 00:05:22.030 "abort": true, 00:05:22.030 "nvme_admin": false, 00:05:22.030 "nvme_io": false 00:05:22.030 }, 00:05:22.030 "memory_domains": [ 00:05:22.030 { 00:05:22.030 "dma_device_id": "system", 00:05:22.030 "dma_device_type": 1 00:05:22.030 }, 00:05:22.030 { 00:05:22.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.030 "dma_device_type": 2 00:05:22.030 } 00:05:22.030 ], 00:05:22.030 "driver_specific": {} 00:05:22.030 } 00:05:22.030 ]' 00:05:22.030 04:21:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:22.030 04:21:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:22.030 04:21:42 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:22.030 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.030 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.030 [2024-07-14 04:21:42.089703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:22.030 [2024-07-14 04:21:42.089752] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:22.030 [2024-07-14 04:21:42.089777] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23768f0 00:05:22.030 [2024-07-14 04:21:42.089792] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:22.030 [2024-07-14 04:21:42.091238] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:22.030 [2024-07-14 04:21:42.091268] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:22.030 Passthru0 00:05:22.030 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.030 04:21:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:22.030 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.030 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.030 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.030 04:21:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:22.030 { 00:05:22.030 "name": "Malloc0", 00:05:22.030 "aliases": [ 00:05:22.030 "e0dd5266-e974-4d50-b337-38f252c4a19c" 00:05:22.030 ], 00:05:22.030 "product_name": "Malloc disk", 00:05:22.030 "block_size": 512, 00:05:22.030 "num_blocks": 16384, 00:05:22.030 "uuid": "e0dd5266-e974-4d50-b337-38f252c4a19c", 00:05:22.030 "assigned_rate_limits": { 00:05:22.030 "rw_ios_per_sec": 0, 00:05:22.030 "rw_mbytes_per_sec": 0, 00:05:22.030 "r_mbytes_per_sec": 0, 00:05:22.030 "w_mbytes_per_sec": 0 00:05:22.030 }, 00:05:22.030 "claimed": true, 00:05:22.030 "claim_type": "exclusive_write", 00:05:22.030 "zoned": false, 00:05:22.030 "supported_io_types": { 00:05:22.030 "read": true, 00:05:22.030 "write": true, 00:05:22.030 "unmap": true, 00:05:22.030 "write_zeroes": true, 00:05:22.030 "flush": true, 00:05:22.030 "reset": true, 00:05:22.030 "compare": false, 00:05:22.030 "compare_and_write": false, 00:05:22.030 "abort": true, 00:05:22.030 "nvme_admin": false, 00:05:22.030 "nvme_io": false 00:05:22.030 }, 00:05:22.030 "memory_domains": [ 00:05:22.030 { 00:05:22.030 "dma_device_id": "system", 00:05:22.030 "dma_device_type": 1 00:05:22.030 }, 00:05:22.030 { 00:05:22.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.030 "dma_device_type": 2 00:05:22.030 } 00:05:22.030 ], 00:05:22.030 "driver_specific": {} 00:05:22.030 }, 00:05:22.030 { 00:05:22.030 "name": "Passthru0", 00:05:22.030 "aliases": [ 00:05:22.030 "4d7a6a9a-fd4e-5667-98eb-1565edaebfbf" 00:05:22.030 ], 00:05:22.030 "product_name": "passthru", 00:05:22.030 "block_size": 512, 00:05:22.031 "num_blocks": 16384, 00:05:22.031 "uuid": "4d7a6a9a-fd4e-5667-98eb-1565edaebfbf", 00:05:22.031 "assigned_rate_limits": { 00:05:22.031 "rw_ios_per_sec": 0, 00:05:22.031 "rw_mbytes_per_sec": 0, 00:05:22.031 "r_mbytes_per_sec": 0, 00:05:22.031 "w_mbytes_per_sec": 0 00:05:22.031 }, 00:05:22.031 "claimed": false, 00:05:22.031 "zoned": false, 00:05:22.031 "supported_io_types": { 00:05:22.031 "read": true, 00:05:22.031 "write": true, 00:05:22.031 "unmap": true, 00:05:22.031 "write_zeroes": true, 00:05:22.031 "flush": true, 00:05:22.031 "reset": true, 00:05:22.031 "compare": false, 00:05:22.031 "compare_and_write": false, 00:05:22.031 "abort": true, 00:05:22.031 "nvme_admin": false, 00:05:22.031 "nvme_io": false 00:05:22.031 }, 00:05:22.031 "memory_domains": [ 00:05:22.031 { 00:05:22.031 "dma_device_id": "system", 00:05:22.031 "dma_device_type": 1 00:05:22.031 }, 00:05:22.031 { 00:05:22.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.031 "dma_device_type": 2 00:05:22.031 } 00:05:22.031 ], 00:05:22.031 "driver_specific": { 00:05:22.031 "passthru": { 00:05:22.031 "name": "Passthru0", 00:05:22.031 "base_bdev_name": "Malloc0" 00:05:22.031 } 00:05:22.031 } 00:05:22.031 } 00:05:22.031 ]' 00:05:22.031 04:21:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:22.031 04:21:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:22.031 04:21:42 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:22.031 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.031 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.031 
04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.031 04:21:42 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:22.031 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.031 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.031 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.031 04:21:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:22.031 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.031 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.031 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.031 04:21:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:22.031 04:21:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:22.031 04:21:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:22.031 00:05:22.031 real 0m0.232s 00:05:22.031 user 0m0.153s 00:05:22.031 sys 0m0.021s 00:05:22.031 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.031 04:21:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.031 ************************************ 00:05:22.031 END TEST rpc_integrity 00:05:22.031 ************************************ 00:05:22.289 04:21:42 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:22.289 04:21:42 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.289 04:21:42 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.289 04:21:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.289 ************************************ 00:05:22.289 START TEST rpc_plugins 00:05:22.289 ************************************ 00:05:22.289 04:21:42 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:22.289 04:21:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:22.289 04:21:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.289 04:21:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.289 04:21:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.289 04:21:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:22.289 04:21:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:22.289 04:21:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.289 04:21:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.289 04:21:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.289 04:21:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:22.289 { 00:05:22.289 "name": "Malloc1", 00:05:22.289 "aliases": [ 00:05:22.289 "6540132c-e1cd-4242-91d1-fb12a38c4bc5" 00:05:22.289 ], 00:05:22.289 "product_name": "Malloc disk", 00:05:22.289 "block_size": 4096, 00:05:22.289 "num_blocks": 256, 00:05:22.289 "uuid": "6540132c-e1cd-4242-91d1-fb12a38c4bc5", 00:05:22.289 "assigned_rate_limits": { 00:05:22.289 "rw_ios_per_sec": 0, 00:05:22.289 "rw_mbytes_per_sec": 0, 00:05:22.289 "r_mbytes_per_sec": 0, 00:05:22.289 "w_mbytes_per_sec": 0 00:05:22.289 }, 00:05:22.289 "claimed": false, 00:05:22.290 "zoned": false, 00:05:22.290 "supported_io_types": { 00:05:22.290 "read": true, 00:05:22.290 "write": true, 00:05:22.290 "unmap": true, 00:05:22.290 "write_zeroes": true, 00:05:22.290 
"flush": true, 00:05:22.290 "reset": true, 00:05:22.290 "compare": false, 00:05:22.290 "compare_and_write": false, 00:05:22.290 "abort": true, 00:05:22.290 "nvme_admin": false, 00:05:22.290 "nvme_io": false 00:05:22.290 }, 00:05:22.290 "memory_domains": [ 00:05:22.290 { 00:05:22.290 "dma_device_id": "system", 00:05:22.290 "dma_device_type": 1 00:05:22.290 }, 00:05:22.290 { 00:05:22.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.290 "dma_device_type": 2 00:05:22.290 } 00:05:22.290 ], 00:05:22.290 "driver_specific": {} 00:05:22.290 } 00:05:22.290 ]' 00:05:22.290 04:21:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:22.290 04:21:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:22.290 04:21:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:22.290 04:21:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.290 04:21:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.290 04:21:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.290 04:21:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:22.290 04:21:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.290 04:21:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.290 04:21:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.290 04:21:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:22.290 04:21:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:22.290 04:21:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:22.290 00:05:22.290 real 0m0.109s 00:05:22.290 user 0m0.071s 00:05:22.290 sys 0m0.011s 00:05:22.290 04:21:42 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.290 04:21:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.290 ************************************ 00:05:22.290 END TEST rpc_plugins 00:05:22.290 ************************************ 00:05:22.290 04:21:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:22.290 04:21:42 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.290 04:21:42 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.290 04:21:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.290 ************************************ 00:05:22.290 START TEST rpc_trace_cmd_test 00:05:22.290 ************************************ 00:05:22.290 04:21:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:22.290 04:21:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:22.290 04:21:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:22.290 04:21:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.290 04:21:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:22.290 04:21:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.290 04:21:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:22.290 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2657618", 00:05:22.290 "tpoint_group_mask": "0x8", 00:05:22.290 "iscsi_conn": { 00:05:22.290 "mask": "0x2", 00:05:22.290 "tpoint_mask": "0x0" 00:05:22.290 }, 00:05:22.290 "scsi": { 00:05:22.290 "mask": "0x4", 00:05:22.290 "tpoint_mask": "0x0" 00:05:22.290 }, 00:05:22.290 "bdev": { 00:05:22.290 "mask": "0x8", 00:05:22.290 "tpoint_mask": 
"0xffffffffffffffff" 00:05:22.290 }, 00:05:22.290 "nvmf_rdma": { 00:05:22.290 "mask": "0x10", 00:05:22.290 "tpoint_mask": "0x0" 00:05:22.290 }, 00:05:22.290 "nvmf_tcp": { 00:05:22.290 "mask": "0x20", 00:05:22.290 "tpoint_mask": "0x0" 00:05:22.290 }, 00:05:22.290 "ftl": { 00:05:22.290 "mask": "0x40", 00:05:22.290 "tpoint_mask": "0x0" 00:05:22.290 }, 00:05:22.290 "blobfs": { 00:05:22.290 "mask": "0x80", 00:05:22.290 "tpoint_mask": "0x0" 00:05:22.290 }, 00:05:22.290 "dsa": { 00:05:22.290 "mask": "0x200", 00:05:22.290 "tpoint_mask": "0x0" 00:05:22.290 }, 00:05:22.290 "thread": { 00:05:22.290 "mask": "0x400", 00:05:22.290 "tpoint_mask": "0x0" 00:05:22.290 }, 00:05:22.290 "nvme_pcie": { 00:05:22.290 "mask": "0x800", 00:05:22.290 "tpoint_mask": "0x0" 00:05:22.290 }, 00:05:22.290 "iaa": { 00:05:22.290 "mask": "0x1000", 00:05:22.290 "tpoint_mask": "0x0" 00:05:22.290 }, 00:05:22.290 "nvme_tcp": { 00:05:22.290 "mask": "0x2000", 00:05:22.290 "tpoint_mask": "0x0" 00:05:22.290 }, 00:05:22.290 "bdev_nvme": { 00:05:22.290 "mask": "0x4000", 00:05:22.290 "tpoint_mask": "0x0" 00:05:22.290 }, 00:05:22.290 "sock": { 00:05:22.290 "mask": "0x8000", 00:05:22.290 "tpoint_mask": "0x0" 00:05:22.290 } 00:05:22.290 }' 00:05:22.290 04:21:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:22.290 04:21:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:22.290 04:21:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:22.549 04:21:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:22.549 04:21:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:22.549 04:21:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:22.549 04:21:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:22.549 04:21:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:22.549 04:21:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:22.549 04:21:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:22.549 00:05:22.549 real 0m0.197s 00:05:22.549 user 0m0.175s 00:05:22.549 sys 0m0.015s 00:05:22.549 04:21:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.549 04:21:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:22.549 ************************************ 00:05:22.549 END TEST rpc_trace_cmd_test 00:05:22.549 ************************************ 00:05:22.549 04:21:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:22.549 04:21:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:22.549 04:21:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:22.549 04:21:42 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.549 04:21:42 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.549 04:21:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.549 ************************************ 00:05:22.549 START TEST rpc_daemon_integrity 00:05:22.549 ************************************ 00:05:22.549 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:22.549 04:21:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:22.549 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.549 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.549 04:21:42 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.549 04:21:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:22.549 04:21:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:22.549 04:21:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:22.549 04:21:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:22.549 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.549 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.549 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.549 04:21:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:22.549 04:21:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:22.549 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.549 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.549 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.549 04:21:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:22.549 { 00:05:22.549 "name": "Malloc2", 00:05:22.549 "aliases": [ 00:05:22.549 "383a7e8b-155a-4d3e-9ed6-db17c411e3d3" 00:05:22.549 ], 00:05:22.549 "product_name": "Malloc disk", 00:05:22.549 "block_size": 512, 00:05:22.549 "num_blocks": 16384, 00:05:22.549 "uuid": "383a7e8b-155a-4d3e-9ed6-db17c411e3d3", 00:05:22.549 "assigned_rate_limits": { 00:05:22.549 "rw_ios_per_sec": 0, 00:05:22.549 "rw_mbytes_per_sec": 0, 00:05:22.549 "r_mbytes_per_sec": 0, 00:05:22.549 "w_mbytes_per_sec": 0 00:05:22.549 }, 00:05:22.549 "claimed": false, 00:05:22.549 "zoned": false, 00:05:22.549 "supported_io_types": { 00:05:22.549 "read": true, 00:05:22.549 "write": true, 00:05:22.549 "unmap": true, 00:05:22.549 "write_zeroes": true, 00:05:22.549 "flush": true, 00:05:22.549 "reset": true, 00:05:22.549 "compare": false, 00:05:22.549 "compare_and_write": false, 00:05:22.549 "abort": true, 00:05:22.549 "nvme_admin": false, 00:05:22.549 "nvme_io": false 00:05:22.549 }, 00:05:22.549 "memory_domains": [ 00:05:22.549 { 00:05:22.549 "dma_device_id": "system", 00:05:22.549 "dma_device_type": 1 00:05:22.549 }, 00:05:22.549 { 00:05:22.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.549 "dma_device_type": 2 00:05:22.549 } 00:05:22.549 ], 00:05:22.549 "driver_specific": {} 00:05:22.549 } 00:05:22.549 ]' 00:05:22.549 04:21:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:22.808 04:21:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:22.808 04:21:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:22.808 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.808 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.808 [2024-07-14 04:21:42.768594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:22.808 [2024-07-14 04:21:42.768642] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:22.808 [2024-07-14 04:21:42.768668] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2271600 00:05:22.808 [2024-07-14 04:21:42.768683] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:22.808 [2024-07-14 04:21:42.770116] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:22.808 [2024-07-14 04:21:42.770143] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:22.808 Passthru0 00:05:22.808 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.808 04:21:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:22.808 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.808 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.808 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.808 04:21:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:22.808 { 00:05:22.808 "name": "Malloc2", 00:05:22.808 "aliases": [ 00:05:22.808 "383a7e8b-155a-4d3e-9ed6-db17c411e3d3" 00:05:22.808 ], 00:05:22.808 "product_name": "Malloc disk", 00:05:22.808 "block_size": 512, 00:05:22.808 "num_blocks": 16384, 00:05:22.808 "uuid": "383a7e8b-155a-4d3e-9ed6-db17c411e3d3", 00:05:22.808 "assigned_rate_limits": { 00:05:22.808 "rw_ios_per_sec": 0, 00:05:22.808 "rw_mbytes_per_sec": 0, 00:05:22.808 "r_mbytes_per_sec": 0, 00:05:22.808 "w_mbytes_per_sec": 0 00:05:22.808 }, 00:05:22.808 "claimed": true, 00:05:22.808 "claim_type": "exclusive_write", 00:05:22.808 "zoned": false, 00:05:22.808 "supported_io_types": { 00:05:22.808 "read": true, 00:05:22.808 "write": true, 00:05:22.808 "unmap": true, 00:05:22.808 "write_zeroes": true, 00:05:22.808 "flush": true, 00:05:22.808 "reset": true, 00:05:22.808 "compare": false, 00:05:22.808 "compare_and_write": false, 00:05:22.808 "abort": true, 00:05:22.808 "nvme_admin": false, 00:05:22.808 "nvme_io": false 00:05:22.808 }, 00:05:22.808 "memory_domains": [ 00:05:22.808 { 00:05:22.808 "dma_device_id": "system", 00:05:22.808 "dma_device_type": 1 00:05:22.808 }, 00:05:22.808 { 00:05:22.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.808 "dma_device_type": 2 00:05:22.808 } 00:05:22.808 ], 00:05:22.808 "driver_specific": {} 00:05:22.808 }, 00:05:22.808 { 00:05:22.808 "name": "Passthru0", 00:05:22.808 "aliases": [ 00:05:22.808 "786f29ce-850a-5dc5-8703-5963734dca1a" 00:05:22.808 ], 00:05:22.808 "product_name": "passthru", 00:05:22.808 "block_size": 512, 00:05:22.808 "num_blocks": 16384, 00:05:22.808 "uuid": "786f29ce-850a-5dc5-8703-5963734dca1a", 00:05:22.808 "assigned_rate_limits": { 00:05:22.808 "rw_ios_per_sec": 0, 00:05:22.808 "rw_mbytes_per_sec": 0, 00:05:22.808 "r_mbytes_per_sec": 0, 00:05:22.808 "w_mbytes_per_sec": 0 00:05:22.808 }, 00:05:22.808 "claimed": false, 00:05:22.808 "zoned": false, 00:05:22.808 "supported_io_types": { 00:05:22.808 "read": true, 00:05:22.808 "write": true, 00:05:22.808 "unmap": true, 00:05:22.808 "write_zeroes": true, 00:05:22.808 "flush": true, 00:05:22.808 "reset": true, 00:05:22.808 "compare": false, 00:05:22.808 "compare_and_write": false, 00:05:22.808 "abort": true, 00:05:22.808 "nvme_admin": false, 00:05:22.808 "nvme_io": false 00:05:22.808 }, 00:05:22.808 "memory_domains": [ 00:05:22.809 { 00:05:22.809 "dma_device_id": "system", 00:05:22.809 "dma_device_type": 1 00:05:22.809 }, 00:05:22.809 { 00:05:22.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.809 "dma_device_type": 2 00:05:22.809 } 00:05:22.809 ], 00:05:22.809 "driver_specific": { 00:05:22.809 "passthru": { 00:05:22.809 "name": "Passthru0", 00:05:22.809 "base_bdev_name": "Malloc2" 00:05:22.809 } 00:05:22.809 } 00:05:22.809 } 00:05:22.809 ]' 00:05:22.809 04:21:42 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:22.809 04:21:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:22.809 04:21:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:22.809 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.809 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.809 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.809 04:21:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:22.809 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.809 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.809 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.809 04:21:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:22.809 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.809 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.809 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.809 04:21:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:22.809 04:21:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:22.809 04:21:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:22.809 00:05:22.809 real 0m0.228s 00:05:22.809 user 0m0.148s 00:05:22.809 sys 0m0.022s 00:05:22.809 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.809 04:21:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.809 ************************************ 00:05:22.809 END TEST rpc_daemon_integrity 00:05:22.809 ************************************ 00:05:22.809 04:21:42 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:22.809 04:21:42 rpc -- rpc/rpc.sh@84 -- # killprocess 2657618 00:05:22.809 04:21:42 rpc -- common/autotest_common.sh@946 -- # '[' -z 2657618 ']' 00:05:22.809 04:21:42 rpc -- common/autotest_common.sh@950 -- # kill -0 2657618 00:05:22.809 04:21:42 rpc -- common/autotest_common.sh@951 -- # uname 00:05:22.809 04:21:42 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:22.809 04:21:42 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2657618 00:05:22.809 04:21:42 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:22.809 04:21:42 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:22.809 04:21:42 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2657618' 00:05:22.809 killing process with pid 2657618 00:05:22.809 04:21:42 rpc -- common/autotest_common.sh@965 -- # kill 2657618 00:05:22.809 04:21:42 rpc -- common/autotest_common.sh@970 -- # wait 2657618 00:05:23.376 00:05:23.376 real 0m1.894s 00:05:23.376 user 0m2.369s 00:05:23.376 sys 0m0.586s 00:05:23.376 04:21:43 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.376 04:21:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.376 ************************************ 00:05:23.376 END TEST rpc 00:05:23.376 ************************************ 00:05:23.376 04:21:43 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:23.376 04:21:43 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.376 04:21:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.376 04:21:43 -- common/autotest_common.sh@10 -- # set +x 00:05:23.376 ************************************ 00:05:23.376 START TEST skip_rpc 00:05:23.376 ************************************ 00:05:23.376 04:21:43 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:23.376 * Looking for test storage... 00:05:23.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:23.376 04:21:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:23.376 04:21:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:23.376 04:21:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:23.376 04:21:43 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.376 04:21:43 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.376 04:21:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.376 ************************************ 00:05:23.376 START TEST skip_rpc 00:05:23.376 ************************************ 00:05:23.376 04:21:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:23.376 04:21:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2657949 00:05:23.376 04:21:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:23.376 04:21:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.376 04:21:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:23.376 [2024-07-14 04:21:43.526194] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:05:23.376 [2024-07-14 04:21:43.526256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2657949 ] 00:05:23.376 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.634 [2024-07-14 04:21:43.584466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.634 [2024-07-14 04:21:43.674548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2657949 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 2657949 ']' 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 2657949 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2657949 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2657949' 00:05:28.922 killing process with pid 2657949 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 2657949 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 2657949 00:05:28.922 00:05:28.922 real 0m5.440s 00:05:28.922 user 0m5.133s 00:05:28.922 sys 0m0.311s 00:05:28.922 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.923 04:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.923 ************************************ 00:05:28.923 END TEST skip_rpc 
00:05:28.923 ************************************ 00:05:28.923 04:21:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:28.923 04:21:48 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:28.923 04:21:48 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.923 04:21:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.923 ************************************ 00:05:28.923 START TEST skip_rpc_with_json 00:05:28.923 ************************************ 00:05:28.923 04:21:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:28.923 04:21:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:28.923 04:21:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2658634 00:05:28.923 04:21:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.923 04:21:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.923 04:21:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2658634 00:05:28.923 04:21:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 2658634 ']' 00:05:28.923 04:21:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.923 04:21:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:28.923 04:21:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.923 04:21:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:28.923 04:21:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.923 [2024-07-14 04:21:49.012728] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:05:28.923 [2024-07-14 04:21:49.012815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2658634 ] 00:05:28.923 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.923 [2024-07-14 04:21:49.085974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.229 [2024-07-14 04:21:49.185413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.488 04:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:29.488 04:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:29.488 04:21:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:29.488 04:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.488 04:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.488 [2024-07-14 04:21:49.455321] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:29.488 request: 00:05:29.488 { 00:05:29.488 "trtype": "tcp", 00:05:29.488 "method": "nvmf_get_transports", 00:05:29.488 "req_id": 1 00:05:29.488 } 00:05:29.488 Got JSON-RPC error response 00:05:29.488 response: 00:05:29.488 { 00:05:29.488 "code": -19, 00:05:29.488 "message": "No such device" 00:05:29.488 } 00:05:29.488 04:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:29.488 04:21:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:29.488 04:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.488 04:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.488 [2024-07-14 04:21:49.463422] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:29.488 04:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.488 04:21:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:29.488 04:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.488 04:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.488 04:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.488 04:21:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:29.488 { 00:05:29.488 "subsystems": [ 00:05:29.488 { 00:05:29.488 "subsystem": "vfio_user_target", 00:05:29.488 "config": null 00:05:29.488 }, 00:05:29.488 { 00:05:29.488 "subsystem": "keyring", 00:05:29.489 "config": [] 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "subsystem": "iobuf", 00:05:29.489 "config": [ 00:05:29.489 { 00:05:29.489 "method": "iobuf_set_options", 00:05:29.489 "params": { 00:05:29.489 "small_pool_count": 8192, 00:05:29.489 "large_pool_count": 1024, 00:05:29.489 "small_bufsize": 8192, 00:05:29.489 "large_bufsize": 135168 00:05:29.489 } 00:05:29.489 } 00:05:29.489 ] 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "subsystem": "sock", 00:05:29.489 "config": [ 00:05:29.489 { 00:05:29.489 "method": "sock_set_default_impl", 00:05:29.489 "params": { 00:05:29.489 "impl_name": "posix" 00:05:29.489 } 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "method": 
"sock_impl_set_options", 00:05:29.489 "params": { 00:05:29.489 "impl_name": "ssl", 00:05:29.489 "recv_buf_size": 4096, 00:05:29.489 "send_buf_size": 4096, 00:05:29.489 "enable_recv_pipe": true, 00:05:29.489 "enable_quickack": false, 00:05:29.489 "enable_placement_id": 0, 00:05:29.489 "enable_zerocopy_send_server": true, 00:05:29.489 "enable_zerocopy_send_client": false, 00:05:29.489 "zerocopy_threshold": 0, 00:05:29.489 "tls_version": 0, 00:05:29.489 "enable_ktls": false 00:05:29.489 } 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "method": "sock_impl_set_options", 00:05:29.489 "params": { 00:05:29.489 "impl_name": "posix", 00:05:29.489 "recv_buf_size": 2097152, 00:05:29.489 "send_buf_size": 2097152, 00:05:29.489 "enable_recv_pipe": true, 00:05:29.489 "enable_quickack": false, 00:05:29.489 "enable_placement_id": 0, 00:05:29.489 "enable_zerocopy_send_server": true, 00:05:29.489 "enable_zerocopy_send_client": false, 00:05:29.489 "zerocopy_threshold": 0, 00:05:29.489 "tls_version": 0, 00:05:29.489 "enable_ktls": false 00:05:29.489 } 00:05:29.489 } 00:05:29.489 ] 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "subsystem": "vmd", 00:05:29.489 "config": [] 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "subsystem": "accel", 00:05:29.489 "config": [ 00:05:29.489 { 00:05:29.489 "method": "accel_set_options", 00:05:29.489 "params": { 00:05:29.489 "small_cache_size": 128, 00:05:29.489 "large_cache_size": 16, 00:05:29.489 "task_count": 2048, 00:05:29.489 "sequence_count": 2048, 00:05:29.489 "buf_count": 2048 00:05:29.489 } 00:05:29.489 } 00:05:29.489 ] 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "subsystem": "bdev", 00:05:29.489 "config": [ 00:05:29.489 { 00:05:29.489 "method": "bdev_set_options", 00:05:29.489 "params": { 00:05:29.489 "bdev_io_pool_size": 65535, 00:05:29.489 "bdev_io_cache_size": 256, 00:05:29.489 "bdev_auto_examine": true, 00:05:29.489 "iobuf_small_cache_size": 128, 00:05:29.489 "iobuf_large_cache_size": 16 00:05:29.489 } 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "method": "bdev_raid_set_options", 00:05:29.489 "params": { 00:05:29.489 "process_window_size_kb": 1024 00:05:29.489 } 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "method": "bdev_iscsi_set_options", 00:05:29.489 "params": { 00:05:29.489 "timeout_sec": 30 00:05:29.489 } 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "method": "bdev_nvme_set_options", 00:05:29.489 "params": { 00:05:29.489 "action_on_timeout": "none", 00:05:29.489 "timeout_us": 0, 00:05:29.489 "timeout_admin_us": 0, 00:05:29.489 "keep_alive_timeout_ms": 10000, 00:05:29.489 "arbitration_burst": 0, 00:05:29.489 "low_priority_weight": 0, 00:05:29.489 "medium_priority_weight": 0, 00:05:29.489 "high_priority_weight": 0, 00:05:29.489 "nvme_adminq_poll_period_us": 10000, 00:05:29.489 "nvme_ioq_poll_period_us": 0, 00:05:29.489 "io_queue_requests": 0, 00:05:29.489 "delay_cmd_submit": true, 00:05:29.489 "transport_retry_count": 4, 00:05:29.489 "bdev_retry_count": 3, 00:05:29.489 "transport_ack_timeout": 0, 00:05:29.489 "ctrlr_loss_timeout_sec": 0, 00:05:29.489 "reconnect_delay_sec": 0, 00:05:29.489 "fast_io_fail_timeout_sec": 0, 00:05:29.489 "disable_auto_failback": false, 00:05:29.489 "generate_uuids": false, 00:05:29.489 "transport_tos": 0, 00:05:29.489 "nvme_error_stat": false, 00:05:29.489 "rdma_srq_size": 0, 00:05:29.489 "io_path_stat": false, 00:05:29.489 "allow_accel_sequence": false, 00:05:29.489 "rdma_max_cq_size": 0, 00:05:29.489 "rdma_cm_event_timeout_ms": 0, 00:05:29.489 "dhchap_digests": [ 00:05:29.489 "sha256", 00:05:29.489 "sha384", 00:05:29.489 "sha512" 
00:05:29.489 ], 00:05:29.489 "dhchap_dhgroups": [ 00:05:29.489 "null", 00:05:29.489 "ffdhe2048", 00:05:29.489 "ffdhe3072", 00:05:29.489 "ffdhe4096", 00:05:29.489 "ffdhe6144", 00:05:29.489 "ffdhe8192" 00:05:29.489 ] 00:05:29.489 } 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "method": "bdev_nvme_set_hotplug", 00:05:29.489 "params": { 00:05:29.489 "period_us": 100000, 00:05:29.489 "enable": false 00:05:29.489 } 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "method": "bdev_wait_for_examine" 00:05:29.489 } 00:05:29.489 ] 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "subsystem": "scsi", 00:05:29.489 "config": null 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "subsystem": "scheduler", 00:05:29.489 "config": [ 00:05:29.489 { 00:05:29.489 "method": "framework_set_scheduler", 00:05:29.489 "params": { 00:05:29.489 "name": "static" 00:05:29.489 } 00:05:29.489 } 00:05:29.489 ] 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "subsystem": "vhost_scsi", 00:05:29.489 "config": [] 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "subsystem": "vhost_blk", 00:05:29.489 "config": [] 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "subsystem": "ublk", 00:05:29.489 "config": [] 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "subsystem": "nbd", 00:05:29.489 "config": [] 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "subsystem": "nvmf", 00:05:29.489 "config": [ 00:05:29.489 { 00:05:29.489 "method": "nvmf_set_config", 00:05:29.489 "params": { 00:05:29.489 "discovery_filter": "match_any", 00:05:29.489 "admin_cmd_passthru": { 00:05:29.489 "identify_ctrlr": false 00:05:29.489 } 00:05:29.489 } 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "method": "nvmf_set_max_subsystems", 00:05:29.489 "params": { 00:05:29.489 "max_subsystems": 1024 00:05:29.489 } 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "method": "nvmf_set_crdt", 00:05:29.489 "params": { 00:05:29.489 "crdt1": 0, 00:05:29.489 "crdt2": 0, 00:05:29.489 "crdt3": 0 00:05:29.489 } 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "method": "nvmf_create_transport", 00:05:29.489 "params": { 00:05:29.489 "trtype": "TCP", 00:05:29.489 "max_queue_depth": 128, 00:05:29.489 "max_io_qpairs_per_ctrlr": 127, 00:05:29.489 "in_capsule_data_size": 4096, 00:05:29.489 "max_io_size": 131072, 00:05:29.489 "io_unit_size": 131072, 00:05:29.489 "max_aq_depth": 128, 00:05:29.489 "num_shared_buffers": 511, 00:05:29.489 "buf_cache_size": 4294967295, 00:05:29.489 "dif_insert_or_strip": false, 00:05:29.489 "zcopy": false, 00:05:29.489 "c2h_success": true, 00:05:29.489 "sock_priority": 0, 00:05:29.489 "abort_timeout_sec": 1, 00:05:29.489 "ack_timeout": 0, 00:05:29.489 "data_wr_pool_size": 0 00:05:29.489 } 00:05:29.489 } 00:05:29.489 ] 00:05:29.489 }, 00:05:29.489 { 00:05:29.489 "subsystem": "iscsi", 00:05:29.489 "config": [ 00:05:29.489 { 00:05:29.489 "method": "iscsi_set_options", 00:05:29.489 "params": { 00:05:29.489 "node_base": "iqn.2016-06.io.spdk", 00:05:29.489 "max_sessions": 128, 00:05:29.489 "max_connections_per_session": 2, 00:05:29.489 "max_queue_depth": 64, 00:05:29.489 "default_time2wait": 2, 00:05:29.489 "default_time2retain": 20, 00:05:29.489 "first_burst_length": 8192, 00:05:29.489 "immediate_data": true, 00:05:29.489 "allow_duplicated_isid": false, 00:05:29.489 "error_recovery_level": 0, 00:05:29.489 "nop_timeout": 60, 00:05:29.489 "nop_in_interval": 30, 00:05:29.489 "disable_chap": false, 00:05:29.489 "require_chap": false, 00:05:29.489 "mutual_chap": false, 00:05:29.489 "chap_group": 0, 00:05:29.489 "max_large_datain_per_connection": 64, 00:05:29.489 "max_r2t_per_connection": 4, 00:05:29.489 
"pdu_pool_size": 36864, 00:05:29.489 "immediate_data_pool_size": 16384, 00:05:29.489 "data_out_pool_size": 2048 00:05:29.489 } 00:05:29.489 } 00:05:29.489 ] 00:05:29.489 } 00:05:29.489 ] 00:05:29.489 } 00:05:29.489 04:21:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:29.489 04:21:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2658634 00:05:29.489 04:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 2658634 ']' 00:05:29.489 04:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 2658634 00:05:29.489 04:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:29.489 04:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:29.489 04:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2658634 00:05:29.489 04:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:29.489 04:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:29.489 04:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2658634' 00:05:29.489 killing process with pid 2658634 00:05:29.489 04:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 2658634 00:05:29.490 04:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 2658634 00:05:30.057 04:21:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2658777 00:05:30.057 04:21:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:30.057 04:21:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:35.317 04:21:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2658777 00:05:35.317 04:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 2658777 ']' 00:05:35.317 04:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 2658777 00:05:35.317 04:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:35.317 04:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:35.317 04:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2658777 00:05:35.317 04:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:35.317 04:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:35.317 04:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2658777' 00:05:35.317 killing process with pid 2658777 00:05:35.317 04:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 2658777 00:05:35.317 04:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 2658777 00:05:35.317 04:21:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:35.317 04:21:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:35.318 00:05:35.318 real 
0m6.512s 00:05:35.318 user 0m6.161s 00:05:35.318 sys 0m0.724s 00:05:35.318 04:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.318 04:21:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:35.318 ************************************ 00:05:35.318 END TEST skip_rpc_with_json 00:05:35.318 ************************************ 00:05:35.318 04:21:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:35.318 04:21:55 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.318 04:21:55 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.318 04:21:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.575 ************************************ 00:05:35.575 START TEST skip_rpc_with_delay 00:05:35.575 ************************************ 00:05:35.575 04:21:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:35.575 04:21:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:35.575 04:21:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:35.575 04:21:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:35.575 04:21:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.575 04:21:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:35.575 04:21:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.575 04:21:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:35.575 04:21:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.575 04:21:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:35.575 04:21:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.575 04:21:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:35.575 04:21:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:35.575 [2024-07-14 04:21:55.583619] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:35.575 [2024-07-14 04:21:55.583725] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:35.575 04:21:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:35.575 04:21:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:35.575 04:21:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:35.575 04:21:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:35.575 00:05:35.575 real 0m0.071s 00:05:35.575 user 0m0.047s 00:05:35.575 sys 0m0.024s 00:05:35.575 04:21:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.575 04:21:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:35.575 ************************************ 00:05:35.575 END TEST skip_rpc_with_delay 00:05:35.575 ************************************ 00:05:35.575 04:21:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:35.575 04:21:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:35.575 04:21:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:35.575 04:21:55 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.575 04:21:55 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.575 04:21:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.575 ************************************ 00:05:35.575 START TEST exit_on_failed_rpc_init 00:05:35.575 ************************************ 00:05:35.575 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:35.575 04:21:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2659499 00:05:35.575 04:21:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.575 04:21:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2659499 00:05:35.575 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 2659499 ']' 00:05:35.575 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.575 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:35.575 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.575 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:35.575 04:21:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:35.575 [2024-07-14 04:21:55.697937] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:05:35.575 [2024-07-14 04:21:55.698041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2659499 ] 00:05:35.575 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.575 [2024-07-14 04:21:55.760826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.832 [2024-07-14 04:21:55.850882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.089 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:36.089 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:36.089 04:21:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.089 04:21:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:36.089 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:36.089 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:36.089 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.089 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.089 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.089 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.089 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.089 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.089 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.089 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:36.089 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:36.089 [2024-07-14 04:21:56.163833] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:36.089 [2024-07-14 04:21:56.163956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2659536 ] 00:05:36.089 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.089 [2024-07-14 04:21:56.228013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.347 [2024-07-14 04:21:56.322374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.347 [2024-07-14 04:21:56.322491] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:36.347 [2024-07-14 04:21:56.322514] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:36.347 [2024-07-14 04:21:56.322528] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:36.347 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:36.347 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:36.347 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:36.347 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:36.347 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:36.347 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:36.347 04:21:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:36.347 04:21:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2659499 00:05:36.347 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 2659499 ']' 00:05:36.347 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 2659499 00:05:36.347 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:36.347 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:36.347 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2659499 00:05:36.347 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:36.347 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:36.347 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2659499' 00:05:36.347 killing process with pid 2659499 00:05:36.347 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 2659499 00:05:36.347 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 2659499 00:05:36.912 00:05:36.912 real 0m1.204s 00:05:36.912 user 0m1.289s 00:05:36.912 sys 0m0.466s 00:05:36.912 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.912 04:21:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:36.912 ************************************ 00:05:36.912 END TEST exit_on_failed_rpc_init 00:05:36.912 ************************************ 00:05:36.912 04:21:56 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:36.912 00:05:36.912 real 0m13.475s 00:05:36.912 user 0m12.734s 00:05:36.912 sys 0m1.685s 00:05:36.912 04:21:56 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.912 04:21:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.912 ************************************ 00:05:36.912 END TEST skip_rpc 00:05:36.912 ************************************ 00:05:36.912 04:21:56 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:36.912 04:21:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.912 04:21:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.912 04:21:56 -- 
common/autotest_common.sh@10 -- # set +x 00:05:36.912 ************************************ 00:05:36.912 START TEST rpc_client 00:05:36.912 ************************************ 00:05:36.912 04:21:56 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:36.912 * Looking for test storage... 00:05:36.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:36.912 04:21:56 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:36.912 OK 00:05:36.912 04:21:56 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:36.912 00:05:36.912 real 0m0.072s 00:05:36.912 user 0m0.030s 00:05:36.912 sys 0m0.047s 00:05:36.912 04:21:56 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.912 04:21:56 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:36.912 ************************************ 00:05:36.912 END TEST rpc_client 00:05:36.912 ************************************ 00:05:36.912 04:21:57 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:36.912 04:21:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.912 04:21:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.912 04:21:57 -- common/autotest_common.sh@10 -- # set +x 00:05:36.912 ************************************ 00:05:36.912 START TEST json_config 00:05:36.912 ************************************ 00:05:36.912 04:21:57 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:36.912 04:21:57 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:36.912 04:21:57 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.912 04:21:57 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.912 04:21:57 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.912 04:21:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.912 04:21:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.912 04:21:57 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.912 04:21:57 json_config -- paths/export.sh@5 -- # export PATH 00:05:36.912 04:21:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@47 -- # : 0 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:36.912 04:21:57 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:36.912 04:21:57 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:36.912 04:21:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:36.912 04:21:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:36.912 04:21:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:36.912 04:21:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:36.912 04:21:57 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:36.912 04:21:57 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:36.912 04:21:57 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:36.912 04:21:57 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:36.912 04:21:57 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:36.912 04:21:57 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:36.912 04:21:57 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:36.912 04:21:57 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:36.912 04:21:57 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:36.913 04:21:57 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:36.913 04:21:57 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:36.913 INFO: JSON configuration test init 00:05:36.913 04:21:57 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:36.913 04:21:57 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:36.913 04:21:57 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:36.913 04:21:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.913 04:21:57 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:36.913 04:21:57 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:36.913 04:21:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.913 04:21:57 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:36.913 04:21:57 json_config -- json_config/common.sh@9 -- # local app=target 00:05:36.913 04:21:57 json_config -- json_config/common.sh@10 -- # shift 00:05:36.913 04:21:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:36.913 04:21:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:36.913 04:21:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:36.913 04:21:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.913 04:21:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.171 04:21:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2659748 00:05:37.171 04:21:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:37.171 04:21:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:37.171 Waiting for target to run... 
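For reference, the launch traced above boils down to starting spdk_tgt with a small memory footprint and polling its RPC socket until the application answers. A minimal bash sketch follows; SPDK_DIR and the polling loop are illustrative stand-ins for the harness's own waitforlisten helper, not part of the original log.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # build location used by this job
# Start the target on one core with 1024 MiB of memory; hold initialization until the first RPC.
"$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
tgt_pid=$!
# Poll the UNIX-domain RPC socket until the target is listening (illustrative loop).
for i in $(seq 1 100); do
    "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done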
00:05:37.171 04:21:57 json_config -- json_config/common.sh@25 -- # waitforlisten 2659748 /var/tmp/spdk_tgt.sock 00:05:37.171 04:21:57 json_config -- common/autotest_common.sh@827 -- # '[' -z 2659748 ']' 00:05:37.171 04:21:57 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:37.171 04:21:57 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:37.171 04:21:57 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:37.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:37.171 04:21:57 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:37.171 04:21:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.171 [2024-07-14 04:21:57.149660] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:37.171 [2024-07-14 04:21:57.149758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2659748 ] 00:05:37.171 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.738 [2024-07-14 04:21:57.676704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.738 [2024-07-14 04:21:57.758657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.996 04:21:58 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:37.996 04:21:58 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:37.996 04:21:58 json_config -- json_config/common.sh@26 -- # echo '' 00:05:37.996 00:05:37.996 04:21:58 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:37.996 04:21:58 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:37.996 04:21:58 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:37.996 04:21:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.996 04:21:58 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:37.996 04:21:58 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:37.996 04:21:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:37.996 04:21:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.996 04:21:58 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:37.996 04:21:58 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:37.996 04:21:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:41.279 04:22:01 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:41.279 04:22:01 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:41.279 04:22:01 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:41.279 04:22:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.279 04:22:01 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:41.279 04:22:01 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:41.279 04:22:01 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:05:41.279 04:22:01 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:41.279 04:22:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:41.279 04:22:01 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:41.537 04:22:01 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:41.537 04:22:01 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:41.537 04:22:01 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:41.537 04:22:01 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:41.537 04:22:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.537 04:22:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.537 04:22:01 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:41.537 04:22:01 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:41.537 04:22:01 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:41.537 04:22:01 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:41.537 04:22:01 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:41.537 04:22:01 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:41.537 04:22:01 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:41.537 04:22:01 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:41.537 04:22:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.537 04:22:01 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:41.537 04:22:01 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:41.537 04:22:01 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:41.537 04:22:01 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:41.537 04:22:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:41.795 MallocForNvmf0 00:05:41.795 04:22:01 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:41.795 04:22:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:42.053 MallocForNvmf1 00:05:42.053 04:22:02 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:42.053 04:22:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:42.310 [2024-07-14 04:22:02.305479] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:42.310 04:22:02 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:42.310 04:22:02 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:42.568 04:22:02 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:42.568 04:22:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:42.826 04:22:02 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:42.826 04:22:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:43.084 04:22:03 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:43.084 04:22:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:43.342 [2024-07-14 04:22:03.284682] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:43.342 04:22:03 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:43.342 04:22:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.342 04:22:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.342 04:22:03 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:43.342 04:22:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.342 04:22:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.342 04:22:03 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:43.342 04:22:03 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:43.342 04:22:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:43.599 MallocBdevForConfigChangeCheck 00:05:43.599 04:22:03 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:43.599 04:22:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.599 04:22:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.599 04:22:03 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:43.599 04:22:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:43.857 04:22:03 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:43.857 INFO: shutting down applications... 
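Condensed, the RPC sequence that built the NVMe-oF/TCP target configuration above is the following sketch; it reuses the rpc.py calls visible in the trace, with $SPDK_DIR as assumed earlier.

rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0           # 8 MiB malloc bdev, 512 B blocks
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1          # 4 MiB malloc bdev, 1024 B blocks
$rpc nvmf_create_transport -t tcp -u 8192 -c 0                # TCP transport
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$rpc save_config > spdk_tgt_config.json                       # snapshot reused later by --json and the diff check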
00:05:43.857 04:22:03 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:43.857 04:22:03 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:43.857 04:22:03 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:43.857 04:22:03 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:45.759 Calling clear_iscsi_subsystem 00:05:45.759 Calling clear_nvmf_subsystem 00:05:45.759 Calling clear_nbd_subsystem 00:05:45.759 Calling clear_ublk_subsystem 00:05:45.759 Calling clear_vhost_blk_subsystem 00:05:45.759 Calling clear_vhost_scsi_subsystem 00:05:45.759 Calling clear_bdev_subsystem 00:05:45.759 04:22:05 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:45.759 04:22:05 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:45.759 04:22:05 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:45.759 04:22:05 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:45.759 04:22:05 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:45.759 04:22:05 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:46.018 04:22:05 json_config -- json_config/json_config.sh@345 -- # break 00:05:46.018 04:22:05 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:46.018 04:22:05 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:46.018 04:22:05 json_config -- json_config/common.sh@31 -- # local app=target 00:05:46.018 04:22:05 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:46.018 04:22:05 json_config -- json_config/common.sh@35 -- # [[ -n 2659748 ]] 00:05:46.018 04:22:05 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2659748 00:05:46.018 04:22:05 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:46.018 04:22:05 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.018 04:22:05 json_config -- json_config/common.sh@41 -- # kill -0 2659748 00:05:46.018 04:22:05 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:46.585 04:22:06 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:46.585 04:22:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.585 04:22:06 json_config -- json_config/common.sh@41 -- # kill -0 2659748 00:05:46.585 04:22:06 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:46.585 04:22:06 json_config -- json_config/common.sh@43 -- # break 00:05:46.585 04:22:06 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:46.585 04:22:06 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:46.585 SPDK target shutdown done 00:05:46.585 04:22:06 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:46.585 INFO: relaunching applications... 
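The shutdown path traced above first clears the live configuration, then signals the target and polls until the process exits; roughly, and with the same 30 x 0.5 s budget common.sh uses:

"$SPDK_DIR"/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
kill -SIGINT "$tgt_pid"
for i in $(seq 1 30); do
    kill -0 "$tgt_pid" 2>/dev/null || break     # still running? wait a little longer
    sleep 0.5
done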
00:05:46.585 04:22:06 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.585 04:22:06 json_config -- json_config/common.sh@9 -- # local app=target 00:05:46.585 04:22:06 json_config -- json_config/common.sh@10 -- # shift 00:05:46.585 04:22:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:46.585 04:22:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:46.585 04:22:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:46.585 04:22:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.585 04:22:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.585 04:22:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2661057 00:05:46.585 04:22:06 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.585 04:22:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:46.585 Waiting for target to run... 00:05:46.585 04:22:06 json_config -- json_config/common.sh@25 -- # waitforlisten 2661057 /var/tmp/spdk_tgt.sock 00:05:46.585 04:22:06 json_config -- common/autotest_common.sh@827 -- # '[' -z 2661057 ']' 00:05:46.585 04:22:06 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.585 04:22:06 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:46.585 04:22:06 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.585 04:22:06 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:46.585 04:22:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.585 [2024-07-14 04:22:06.553615] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:46.586 [2024-07-14 04:22:06.553711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2661057 ] 00:05:46.586 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.844 [2024-07-14 04:22:06.910224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.844 [2024-07-14 04:22:06.973737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.125 [2024-07-14 04:22:10.003010] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:50.125 [2024-07-14 04:22:10.035449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:50.125 04:22:10 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:50.125 04:22:10 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:50.125 04:22:10 json_config -- json_config/common.sh@26 -- # echo '' 00:05:50.125 00:05:50.125 04:22:10 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:50.125 04:22:10 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:50.125 INFO: Checking if target configuration is the same... 
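The "same configuration" check that follows works by dumping the running config with save_config, sorting both that dump and the stored spdk_tgt_config.json through config_filter.py, and diffing the results. In outline (the temp-file names are illustrative; json_diff.sh uses mktemp):

$rpc save_config | "$SPDK_DIR"/test/json_config/config_filter.py -method sort > /tmp/live.json
"$SPDK_DIR"/test/json_config/config_filter.py -method sort \
    < "$SPDK_DIR"/spdk_tgt_config.json > /tmp/stored.json
diff -u /tmp/live.json /tmp/stored.json && echo 'INFO: JSON config files are the same'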
00:05:50.125 04:22:10 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.125 04:22:10 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:50.125 04:22:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.125 + '[' 2 -ne 2 ']' 00:05:50.125 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:50.125 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:50.125 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:50.125 +++ basename /dev/fd/62 00:05:50.125 ++ mktemp /tmp/62.XXX 00:05:50.125 + tmp_file_1=/tmp/62.Dku 00:05:50.125 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.125 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:50.125 + tmp_file_2=/tmp/spdk_tgt_config.json.Ae4 00:05:50.125 + ret=0 00:05:50.125 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:50.383 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:50.383 + diff -u /tmp/62.Dku /tmp/spdk_tgt_config.json.Ae4 00:05:50.383 + echo 'INFO: JSON config files are the same' 00:05:50.383 INFO: JSON config files are the same 00:05:50.383 + rm /tmp/62.Dku /tmp/spdk_tgt_config.json.Ae4 00:05:50.383 + exit 0 00:05:50.383 04:22:10 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:50.383 04:22:10 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:50.383 INFO: changing configuration and checking if this can be detected... 00:05:50.383 04:22:10 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:50.383 04:22:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:50.641 04:22:10 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.641 04:22:10 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:50.641 04:22:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.641 + '[' 2 -ne 2 ']' 00:05:50.641 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:50.641 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:50.641 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:50.641 +++ basename /dev/fd/62 00:05:50.641 ++ mktemp /tmp/62.XXX 00:05:50.641 + tmp_file_1=/tmp/62.nsV 00:05:50.641 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.641 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:50.641 + tmp_file_2=/tmp/spdk_tgt_config.json.zYo 00:05:50.641 + ret=0 00:05:50.641 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:51.206 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:51.206 + diff -u /tmp/62.nsV /tmp/spdk_tgt_config.json.zYo 00:05:51.206 + ret=1 00:05:51.206 + echo '=== Start of file: /tmp/62.nsV ===' 00:05:51.206 + cat /tmp/62.nsV 00:05:51.206 + echo '=== End of file: /tmp/62.nsV ===' 00:05:51.206 + echo '' 00:05:51.206 + echo '=== Start of file: /tmp/spdk_tgt_config.json.zYo ===' 00:05:51.206 + cat /tmp/spdk_tgt_config.json.zYo 00:05:51.206 + echo '=== End of file: /tmp/spdk_tgt_config.json.zYo ===' 00:05:51.206 + echo '' 00:05:51.206 + rm /tmp/62.nsV /tmp/spdk_tgt_config.json.zYo 00:05:51.206 + exit 1 00:05:51.206 04:22:11 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:51.206 INFO: configuration change detected. 00:05:51.207 04:22:11 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:51.207 04:22:11 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:51.207 04:22:11 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:51.207 04:22:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.207 04:22:11 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:51.207 04:22:11 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:51.207 04:22:11 json_config -- json_config/json_config.sh@317 -- # [[ -n 2661057 ]] 00:05:51.207 04:22:11 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:51.207 04:22:11 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:51.207 04:22:11 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:51.207 04:22:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.207 04:22:11 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:51.207 04:22:11 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:51.207 04:22:11 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:51.207 04:22:11 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:51.207 04:22:11 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:51.207 04:22:11 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:51.207 04:22:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:51.207 04:22:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.207 04:22:11 json_config -- json_config/json_config.sh@323 -- # killprocess 2661057 00:05:51.207 04:22:11 json_config -- common/autotest_common.sh@946 -- # '[' -z 2661057 ']' 00:05:51.207 04:22:11 json_config -- common/autotest_common.sh@950 -- # kill -0 2661057 00:05:51.207 04:22:11 json_config -- common/autotest_common.sh@951 -- # uname 00:05:51.207 04:22:11 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:51.207 04:22:11 
json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2661057 00:05:51.207 04:22:11 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:51.207 04:22:11 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:51.207 04:22:11 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2661057' 00:05:51.207 killing process with pid 2661057 00:05:51.207 04:22:11 json_config -- common/autotest_common.sh@965 -- # kill 2661057 00:05:51.207 04:22:11 json_config -- common/autotest_common.sh@970 -- # wait 2661057 00:05:53.108 04:22:12 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:53.108 04:22:12 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:53.108 04:22:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.108 04:22:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.108 04:22:12 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:53.108 04:22:12 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:53.108 INFO: Success 00:05:53.108 00:05:53.108 real 0m15.794s 00:05:53.108 user 0m17.517s 00:05:53.108 sys 0m2.010s 00:05:53.108 04:22:12 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:53.108 04:22:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.108 ************************************ 00:05:53.108 END TEST json_config 00:05:53.108 ************************************ 00:05:53.108 04:22:12 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:53.108 04:22:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:53.108 04:22:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.108 04:22:12 -- common/autotest_common.sh@10 -- # set +x 00:05:53.108 ************************************ 00:05:53.108 START TEST json_config_extra_key 00:05:53.108 ************************************ 00:05:53.108 04:22:12 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:53.108 04:22:12 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:53.108 04:22:12 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:53.109 04:22:12 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:53.109 04:22:12 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:53.109 04:22:12 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:53.109 04:22:12 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:53.109 04:22:12 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.109 04:22:12 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.109 04:22:12 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.109 04:22:12 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:53.109 04:22:12 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:53.109 04:22:12 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:53.109 04:22:12 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:53.109 04:22:12 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:53.109 04:22:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:53.109 04:22:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:53.109 04:22:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:53.109 04:22:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:53.109 04:22:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:53.109 04:22:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:53.109 04:22:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:53.109 04:22:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:53.109 04:22:12 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:53.109 04:22:12 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:53.109 INFO: launching applications... 00:05:53.109 04:22:12 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:53.109 04:22:12 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:53.109 04:22:12 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:53.109 04:22:12 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:53.109 04:22:12 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:53.109 04:22:12 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:53.109 04:22:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:53.109 04:22:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:53.109 04:22:12 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2661872 00:05:53.109 04:22:12 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:53.109 04:22:12 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:53.109 Waiting for target to run... 
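This test starts the target with a ready-made JSON configuration (--json .../extra_key.json) rather than configuring it over RPC. The file itself is not reproduced in this log; the sketch below uses an illustrative stand-in in the usual SPDK JSON-config layout to show the shape of such a file and how it is passed to spdk_tgt.

cat > /tmp/extra_key.example.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "MallocForTest", "num_blocks": 20480, "block_size": 512 } }
      ]
    }
  ]
}
EOF
# illustrative content above, not the real extra_key.json
"$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /tmp/extra_key.example.json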
00:05:53.109 04:22:12 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2661872 /var/tmp/spdk_tgt.sock 00:05:53.109 04:22:12 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 2661872 ']' 00:05:53.109 04:22:12 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:53.109 04:22:12 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:53.109 04:22:12 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:53.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:53.109 04:22:12 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:53.109 04:22:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:53.109 [2024-07-14 04:22:12.995815] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:53.109 [2024-07-14 04:22:12.995944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2661872 ] 00:05:53.109 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.366 [2024-07-14 04:22:13.490799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.623 [2024-07-14 04:22:13.573205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.880 04:22:13 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:53.880 04:22:13 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:53.881 04:22:13 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:53.881 00:05:53.881 04:22:13 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:53.881 INFO: shutting down applications... 
00:05:53.881 04:22:13 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:53.881 04:22:13 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:53.881 04:22:13 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:53.881 04:22:13 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2661872 ]] 00:05:53.881 04:22:13 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2661872 00:05:53.881 04:22:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:53.881 04:22:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.881 04:22:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2661872 00:05:53.881 04:22:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:54.446 04:22:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:54.446 04:22:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:54.446 04:22:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2661872 00:05:54.446 04:22:14 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:54.446 04:22:14 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:54.446 04:22:14 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:54.446 04:22:14 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:54.446 SPDK target shutdown done 00:05:54.446 04:22:14 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:54.446 Success 00:05:54.446 00:05:54.446 real 0m1.590s 00:05:54.446 user 0m1.417s 00:05:54.446 sys 0m0.598s 00:05:54.446 04:22:14 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.446 04:22:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:54.446 ************************************ 00:05:54.446 END TEST json_config_extra_key 00:05:54.446 ************************************ 00:05:54.446 04:22:14 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:54.446 04:22:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.446 04:22:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.446 04:22:14 -- common/autotest_common.sh@10 -- # set +x 00:05:54.446 ************************************ 00:05:54.446 START TEST alias_rpc 00:05:54.446 ************************************ 00:05:54.446 04:22:14 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:54.446 * Looking for test storage... 
00:05:54.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:54.446 04:22:14 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:54.446 04:22:14 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2662158 00:05:54.446 04:22:14 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.446 04:22:14 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2662158 00:05:54.446 04:22:14 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 2662158 ']' 00:05:54.446 04:22:14 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.446 04:22:14 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:54.446 04:22:14 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.446 04:22:14 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:54.446 04:22:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.446 [2024-07-14 04:22:14.619660] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:54.446 [2024-07-14 04:22:14.619745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2662158 ] 00:05:54.704 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.704 [2024-07-14 04:22:14.681949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.704 [2024-07-14 04:22:14.766244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.961 04:22:15 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:54.962 04:22:15 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:54.962 04:22:15 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:55.219 04:22:15 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2662158 00:05:55.219 04:22:15 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 2662158 ']' 00:05:55.219 04:22:15 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 2662158 00:05:55.219 04:22:15 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:55.219 04:22:15 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:55.219 04:22:15 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2662158 00:05:55.219 04:22:15 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:55.219 04:22:15 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:55.219 04:22:15 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2662158' 00:05:55.219 killing process with pid 2662158 00:05:55.219 04:22:15 alias_rpc -- common/autotest_common.sh@965 -- # kill 2662158 00:05:55.219 04:22:15 alias_rpc -- common/autotest_common.sh@970 -- # wait 2662158 00:05:55.785 00:05:55.785 real 0m1.198s 00:05:55.785 user 0m1.245s 00:05:55.785 sys 0m0.435s 00:05:55.785 04:22:15 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.785 04:22:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.785 
************************************ 00:05:55.785 END TEST alias_rpc 00:05:55.785 ************************************ 00:05:55.785 04:22:15 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:55.785 04:22:15 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:55.785 04:22:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:55.785 04:22:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.785 04:22:15 -- common/autotest_common.sh@10 -- # set +x 00:05:55.785 ************************************ 00:05:55.785 START TEST spdkcli_tcp 00:05:55.785 ************************************ 00:05:55.785 04:22:15 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:55.785 * Looking for test storage... 00:05:55.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:55.785 04:22:15 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:55.785 04:22:15 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:55.785 04:22:15 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:55.785 04:22:15 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:55.785 04:22:15 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:55.785 04:22:15 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:55.785 04:22:15 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:55.785 04:22:15 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:55.785 04:22:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:55.785 04:22:15 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2662350 00:05:55.785 04:22:15 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:55.785 04:22:15 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2662350 00:05:55.785 04:22:15 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 2662350 ']' 00:05:55.785 04:22:15 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.785 04:22:15 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:55.785 04:22:15 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.785 04:22:15 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:55.785 04:22:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:55.785 [2024-07-14 04:22:15.869575] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:05:55.785 [2024-07-14 04:22:15.869670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2662350 ] 00:05:55.785 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.785 [2024-07-14 04:22:15.927572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.043 [2024-07-14 04:22:16.013208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.043 [2024-07-14 04:22:16.013212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.300 04:22:16 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:56.300 04:22:16 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:05:56.300 04:22:16 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2662354 00:05:56.300 04:22:16 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:56.300 04:22:16 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:56.558 [ 00:05:56.558 "bdev_malloc_delete", 00:05:56.558 "bdev_malloc_create", 00:05:56.558 "bdev_null_resize", 00:05:56.558 "bdev_null_delete", 00:05:56.558 "bdev_null_create", 00:05:56.558 "bdev_nvme_cuse_unregister", 00:05:56.558 "bdev_nvme_cuse_register", 00:05:56.558 "bdev_opal_new_user", 00:05:56.558 "bdev_opal_set_lock_state", 00:05:56.558 "bdev_opal_delete", 00:05:56.558 "bdev_opal_get_info", 00:05:56.558 "bdev_opal_create", 00:05:56.558 "bdev_nvme_opal_revert", 00:05:56.558 "bdev_nvme_opal_init", 00:05:56.558 "bdev_nvme_send_cmd", 00:05:56.558 "bdev_nvme_get_path_iostat", 00:05:56.558 "bdev_nvme_get_mdns_discovery_info", 00:05:56.558 "bdev_nvme_stop_mdns_discovery", 00:05:56.558 "bdev_nvme_start_mdns_discovery", 00:05:56.558 "bdev_nvme_set_multipath_policy", 00:05:56.558 "bdev_nvme_set_preferred_path", 00:05:56.558 "bdev_nvme_get_io_paths", 00:05:56.558 "bdev_nvme_remove_error_injection", 00:05:56.558 "bdev_nvme_add_error_injection", 00:05:56.558 "bdev_nvme_get_discovery_info", 00:05:56.558 "bdev_nvme_stop_discovery", 00:05:56.558 "bdev_nvme_start_discovery", 00:05:56.558 "bdev_nvme_get_controller_health_info", 00:05:56.558 "bdev_nvme_disable_controller", 00:05:56.558 "bdev_nvme_enable_controller", 00:05:56.558 "bdev_nvme_reset_controller", 00:05:56.558 "bdev_nvme_get_transport_statistics", 00:05:56.558 "bdev_nvme_apply_firmware", 00:05:56.558 "bdev_nvme_detach_controller", 00:05:56.558 "bdev_nvme_get_controllers", 00:05:56.558 "bdev_nvme_attach_controller", 00:05:56.558 "bdev_nvme_set_hotplug", 00:05:56.558 "bdev_nvme_set_options", 00:05:56.558 "bdev_passthru_delete", 00:05:56.558 "bdev_passthru_create", 00:05:56.558 "bdev_lvol_set_parent_bdev", 00:05:56.558 "bdev_lvol_set_parent", 00:05:56.558 "bdev_lvol_check_shallow_copy", 00:05:56.558 "bdev_lvol_start_shallow_copy", 00:05:56.558 "bdev_lvol_grow_lvstore", 00:05:56.558 "bdev_lvol_get_lvols", 00:05:56.558 "bdev_lvol_get_lvstores", 00:05:56.558 "bdev_lvol_delete", 00:05:56.558 "bdev_lvol_set_read_only", 00:05:56.558 "bdev_lvol_resize", 00:05:56.558 "bdev_lvol_decouple_parent", 00:05:56.558 "bdev_lvol_inflate", 00:05:56.558 "bdev_lvol_rename", 00:05:56.558 "bdev_lvol_clone_bdev", 00:05:56.558 "bdev_lvol_clone", 00:05:56.558 "bdev_lvol_snapshot", 00:05:56.558 "bdev_lvol_create", 00:05:56.558 "bdev_lvol_delete_lvstore", 00:05:56.558 "bdev_lvol_rename_lvstore", 
00:05:56.558 "bdev_lvol_create_lvstore", 00:05:56.558 "bdev_raid_set_options", 00:05:56.558 "bdev_raid_remove_base_bdev", 00:05:56.558 "bdev_raid_add_base_bdev", 00:05:56.558 "bdev_raid_delete", 00:05:56.558 "bdev_raid_create", 00:05:56.558 "bdev_raid_get_bdevs", 00:05:56.558 "bdev_error_inject_error", 00:05:56.558 "bdev_error_delete", 00:05:56.558 "bdev_error_create", 00:05:56.558 "bdev_split_delete", 00:05:56.558 "bdev_split_create", 00:05:56.558 "bdev_delay_delete", 00:05:56.558 "bdev_delay_create", 00:05:56.558 "bdev_delay_update_latency", 00:05:56.558 "bdev_zone_block_delete", 00:05:56.558 "bdev_zone_block_create", 00:05:56.558 "blobfs_create", 00:05:56.558 "blobfs_detect", 00:05:56.558 "blobfs_set_cache_size", 00:05:56.558 "bdev_aio_delete", 00:05:56.558 "bdev_aio_rescan", 00:05:56.558 "bdev_aio_create", 00:05:56.558 "bdev_ftl_set_property", 00:05:56.558 "bdev_ftl_get_properties", 00:05:56.558 "bdev_ftl_get_stats", 00:05:56.558 "bdev_ftl_unmap", 00:05:56.558 "bdev_ftl_unload", 00:05:56.558 "bdev_ftl_delete", 00:05:56.558 "bdev_ftl_load", 00:05:56.558 "bdev_ftl_create", 00:05:56.558 "bdev_virtio_attach_controller", 00:05:56.558 "bdev_virtio_scsi_get_devices", 00:05:56.558 "bdev_virtio_detach_controller", 00:05:56.558 "bdev_virtio_blk_set_hotplug", 00:05:56.558 "bdev_iscsi_delete", 00:05:56.558 "bdev_iscsi_create", 00:05:56.558 "bdev_iscsi_set_options", 00:05:56.558 "accel_error_inject_error", 00:05:56.558 "ioat_scan_accel_module", 00:05:56.558 "dsa_scan_accel_module", 00:05:56.558 "iaa_scan_accel_module", 00:05:56.558 "vfu_virtio_create_scsi_endpoint", 00:05:56.558 "vfu_virtio_scsi_remove_target", 00:05:56.558 "vfu_virtio_scsi_add_target", 00:05:56.558 "vfu_virtio_create_blk_endpoint", 00:05:56.558 "vfu_virtio_delete_endpoint", 00:05:56.558 "keyring_file_remove_key", 00:05:56.558 "keyring_file_add_key", 00:05:56.558 "keyring_linux_set_options", 00:05:56.558 "iscsi_get_histogram", 00:05:56.558 "iscsi_enable_histogram", 00:05:56.558 "iscsi_set_options", 00:05:56.558 "iscsi_get_auth_groups", 00:05:56.558 "iscsi_auth_group_remove_secret", 00:05:56.558 "iscsi_auth_group_add_secret", 00:05:56.558 "iscsi_delete_auth_group", 00:05:56.558 "iscsi_create_auth_group", 00:05:56.558 "iscsi_set_discovery_auth", 00:05:56.558 "iscsi_get_options", 00:05:56.558 "iscsi_target_node_request_logout", 00:05:56.558 "iscsi_target_node_set_redirect", 00:05:56.558 "iscsi_target_node_set_auth", 00:05:56.558 "iscsi_target_node_add_lun", 00:05:56.558 "iscsi_get_stats", 00:05:56.558 "iscsi_get_connections", 00:05:56.558 "iscsi_portal_group_set_auth", 00:05:56.558 "iscsi_start_portal_group", 00:05:56.558 "iscsi_delete_portal_group", 00:05:56.558 "iscsi_create_portal_group", 00:05:56.558 "iscsi_get_portal_groups", 00:05:56.558 "iscsi_delete_target_node", 00:05:56.558 "iscsi_target_node_remove_pg_ig_maps", 00:05:56.558 "iscsi_target_node_add_pg_ig_maps", 00:05:56.558 "iscsi_create_target_node", 00:05:56.558 "iscsi_get_target_nodes", 00:05:56.558 "iscsi_delete_initiator_group", 00:05:56.558 "iscsi_initiator_group_remove_initiators", 00:05:56.558 "iscsi_initiator_group_add_initiators", 00:05:56.558 "iscsi_create_initiator_group", 00:05:56.558 "iscsi_get_initiator_groups", 00:05:56.558 "nvmf_set_crdt", 00:05:56.558 "nvmf_set_config", 00:05:56.558 "nvmf_set_max_subsystems", 00:05:56.558 "nvmf_stop_mdns_prr", 00:05:56.558 "nvmf_publish_mdns_prr", 00:05:56.558 "nvmf_subsystem_get_listeners", 00:05:56.558 "nvmf_subsystem_get_qpairs", 00:05:56.558 "nvmf_subsystem_get_controllers", 00:05:56.558 "nvmf_get_stats", 00:05:56.558 
"nvmf_get_transports", 00:05:56.558 "nvmf_create_transport", 00:05:56.558 "nvmf_get_targets", 00:05:56.558 "nvmf_delete_target", 00:05:56.558 "nvmf_create_target", 00:05:56.558 "nvmf_subsystem_allow_any_host", 00:05:56.558 "nvmf_subsystem_remove_host", 00:05:56.558 "nvmf_subsystem_add_host", 00:05:56.558 "nvmf_ns_remove_host", 00:05:56.558 "nvmf_ns_add_host", 00:05:56.558 "nvmf_subsystem_remove_ns", 00:05:56.558 "nvmf_subsystem_add_ns", 00:05:56.558 "nvmf_subsystem_listener_set_ana_state", 00:05:56.558 "nvmf_discovery_get_referrals", 00:05:56.558 "nvmf_discovery_remove_referral", 00:05:56.558 "nvmf_discovery_add_referral", 00:05:56.558 "nvmf_subsystem_remove_listener", 00:05:56.558 "nvmf_subsystem_add_listener", 00:05:56.558 "nvmf_delete_subsystem", 00:05:56.558 "nvmf_create_subsystem", 00:05:56.558 "nvmf_get_subsystems", 00:05:56.558 "env_dpdk_get_mem_stats", 00:05:56.558 "nbd_get_disks", 00:05:56.558 "nbd_stop_disk", 00:05:56.558 "nbd_start_disk", 00:05:56.558 "ublk_recover_disk", 00:05:56.558 "ublk_get_disks", 00:05:56.558 "ublk_stop_disk", 00:05:56.558 "ublk_start_disk", 00:05:56.558 "ublk_destroy_target", 00:05:56.558 "ublk_create_target", 00:05:56.558 "virtio_blk_create_transport", 00:05:56.558 "virtio_blk_get_transports", 00:05:56.559 "vhost_controller_set_coalescing", 00:05:56.559 "vhost_get_controllers", 00:05:56.559 "vhost_delete_controller", 00:05:56.559 "vhost_create_blk_controller", 00:05:56.559 "vhost_scsi_controller_remove_target", 00:05:56.559 "vhost_scsi_controller_add_target", 00:05:56.559 "vhost_start_scsi_controller", 00:05:56.559 "vhost_create_scsi_controller", 00:05:56.559 "thread_set_cpumask", 00:05:56.559 "framework_get_scheduler", 00:05:56.559 "framework_set_scheduler", 00:05:56.559 "framework_get_reactors", 00:05:56.559 "thread_get_io_channels", 00:05:56.559 "thread_get_pollers", 00:05:56.559 "thread_get_stats", 00:05:56.559 "framework_monitor_context_switch", 00:05:56.559 "spdk_kill_instance", 00:05:56.559 "log_enable_timestamps", 00:05:56.559 "log_get_flags", 00:05:56.559 "log_clear_flag", 00:05:56.559 "log_set_flag", 00:05:56.559 "log_get_level", 00:05:56.559 "log_set_level", 00:05:56.559 "log_get_print_level", 00:05:56.559 "log_set_print_level", 00:05:56.559 "framework_enable_cpumask_locks", 00:05:56.559 "framework_disable_cpumask_locks", 00:05:56.559 "framework_wait_init", 00:05:56.559 "framework_start_init", 00:05:56.559 "scsi_get_devices", 00:05:56.559 "bdev_get_histogram", 00:05:56.559 "bdev_enable_histogram", 00:05:56.559 "bdev_set_qos_limit", 00:05:56.559 "bdev_set_qd_sampling_period", 00:05:56.559 "bdev_get_bdevs", 00:05:56.559 "bdev_reset_iostat", 00:05:56.559 "bdev_get_iostat", 00:05:56.559 "bdev_examine", 00:05:56.559 "bdev_wait_for_examine", 00:05:56.559 "bdev_set_options", 00:05:56.559 "notify_get_notifications", 00:05:56.559 "notify_get_types", 00:05:56.559 "accel_get_stats", 00:05:56.559 "accel_set_options", 00:05:56.559 "accel_set_driver", 00:05:56.559 "accel_crypto_key_destroy", 00:05:56.559 "accel_crypto_keys_get", 00:05:56.559 "accel_crypto_key_create", 00:05:56.559 "accel_assign_opc", 00:05:56.559 "accel_get_module_info", 00:05:56.559 "accel_get_opc_assignments", 00:05:56.559 "vmd_rescan", 00:05:56.559 "vmd_remove_device", 00:05:56.559 "vmd_enable", 00:05:56.559 "sock_get_default_impl", 00:05:56.559 "sock_set_default_impl", 00:05:56.559 "sock_impl_set_options", 00:05:56.559 "sock_impl_get_options", 00:05:56.559 "iobuf_get_stats", 00:05:56.559 "iobuf_set_options", 00:05:56.559 "keyring_get_keys", 00:05:56.559 "framework_get_pci_devices", 
00:05:56.559 "framework_get_config", 00:05:56.559 "framework_get_subsystems", 00:05:56.559 "vfu_tgt_set_base_path", 00:05:56.559 "trace_get_info", 00:05:56.559 "trace_get_tpoint_group_mask", 00:05:56.559 "trace_disable_tpoint_group", 00:05:56.559 "trace_enable_tpoint_group", 00:05:56.559 "trace_clear_tpoint_mask", 00:05:56.559 "trace_set_tpoint_mask", 00:05:56.559 "spdk_get_version", 00:05:56.559 "rpc_get_methods" 00:05:56.559 ] 00:05:56.559 04:22:16 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:56.559 04:22:16 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:56.559 04:22:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:56.559 04:22:16 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:56.559 04:22:16 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2662350 00:05:56.559 04:22:16 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 2662350 ']' 00:05:56.559 04:22:16 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 2662350 00:05:56.559 04:22:16 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:05:56.559 04:22:16 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:56.559 04:22:16 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2662350 00:05:56.559 04:22:16 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:56.559 04:22:16 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:56.559 04:22:16 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2662350' 00:05:56.559 killing process with pid 2662350 00:05:56.559 04:22:16 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 2662350 00:05:56.559 04:22:16 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 2662350 00:05:56.817 00:05:56.817 real 0m1.206s 00:05:56.817 user 0m2.155s 00:05:56.817 sys 0m0.447s 00:05:56.817 04:22:16 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.817 04:22:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:56.817 ************************************ 00:05:56.817 END TEST spdkcli_tcp 00:05:56.817 ************************************ 00:05:56.817 04:22:16 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:56.817 04:22:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.817 04:22:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.817 04:22:16 -- common/autotest_common.sh@10 -- # set +x 00:05:57.076 ************************************ 00:05:57.076 START TEST dpdk_mem_utility 00:05:57.076 ************************************ 00:05:57.076 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:57.076 * Looking for test storage... 
00:05:57.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:57.076 04:22:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:57.076 04:22:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2662552 00:05:57.076 04:22:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.076 04:22:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2662552 00:05:57.076 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 2662552 ']' 00:05:57.076 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.076 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:57.076 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.076 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:57.076 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:57.076 [2024-07-14 04:22:17.126981] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:57.076 [2024-07-14 04:22:17.127062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2662552 ] 00:05:57.076 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.076 [2024-07-14 04:22:17.183089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.334 [2024-07-14 04:22:17.267599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.334 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:57.334 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:05:57.334 04:22:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:57.334 04:22:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:57.334 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.334 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:57.334 { 00:05:57.334 "filename": "/tmp/spdk_mem_dump.txt" 00:05:57.334 } 00:05:57.334 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.334 04:22:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:57.593 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:57.593 1 heaps totaling size 814.000000 MiB 00:05:57.593 size: 814.000000 MiB heap id: 0 00:05:57.593 end heaps---------- 00:05:57.593 8 mempools totaling size 598.116089 MiB 00:05:57.593 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:57.593 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:57.593 size: 84.521057 MiB name: bdev_io_2662552 00:05:57.593 size: 51.011292 MiB name: evtpool_2662552 00:05:57.593 size: 50.003479 MiB name: 
msgpool_2662552 00:05:57.593 size: 21.763794 MiB name: PDU_Pool 00:05:57.593 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:57.593 size: 0.026123 MiB name: Session_Pool 00:05:57.593 end mempools------- 00:05:57.593 6 memzones totaling size 4.142822 MiB 00:05:57.593 size: 1.000366 MiB name: RG_ring_0_2662552 00:05:57.593 size: 1.000366 MiB name: RG_ring_1_2662552 00:05:57.593 size: 1.000366 MiB name: RG_ring_4_2662552 00:05:57.593 size: 1.000366 MiB name: RG_ring_5_2662552 00:05:57.593 size: 0.125366 MiB name: RG_ring_2_2662552 00:05:57.593 size: 0.015991 MiB name: RG_ring_3_2662552 00:05:57.593 end memzones------- 00:05:57.593 04:22:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:57.593 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:57.593 list of free elements. size: 12.519348 MiB 00:05:57.593 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:57.593 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:57.593 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:57.593 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:57.593 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:57.593 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:57.593 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:57.593 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:57.593 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:57.593 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:57.593 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:57.593 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:57.593 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:57.593 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:57.593 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:57.593 list of standard malloc elements. 
size: 199.218079 MiB 00:05:57.593 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:57.593 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:57.593 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:57.593 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:57.593 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:57.593 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:57.593 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:57.593 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:57.593 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:57.593 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:57.593 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:57.593 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:57.593 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:57.593 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:57.593 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:57.593 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:57.593 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:57.593 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:57.593 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:57.593 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:57.593 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:57.593 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:57.593 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:57.593 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:57.593 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:57.593 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:57.593 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:57.593 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:57.593 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:57.593 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:57.593 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:57.593 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:57.593 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:57.593 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:57.593 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:57.593 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:57.593 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:57.593 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:57.593 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:57.593 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:57.593 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:57.594 list of memzone associated elements. 
size: 602.262573 MiB 00:05:57.594 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:57.594 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:57.594 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:57.594 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:57.594 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:57.594 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2662552_0 00:05:57.594 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:57.594 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2662552_0 00:05:57.594 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:57.594 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2662552_0 00:05:57.594 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:57.594 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:57.594 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:57.594 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:57.594 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:57.594 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2662552 00:05:57.594 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:57.594 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2662552 00:05:57.594 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:57.594 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2662552 00:05:57.594 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:57.594 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:57.594 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:57.594 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:57.594 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:57.594 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:57.594 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:57.594 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:57.594 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:57.594 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2662552 00:05:57.594 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:57.594 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2662552 00:05:57.594 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:57.594 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2662552 00:05:57.594 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:57.594 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2662552 00:05:57.594 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:57.594 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2662552 00:05:57.594 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:57.594 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:57.594 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:57.594 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:57.594 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:57.594 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:57.594 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:57.594 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2662552 00:05:57.594 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:57.594 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:57.594 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:57.594 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:57.594 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:57.594 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2662552 00:05:57.594 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:57.594 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:57.594 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:57.594 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2662552 00:05:57.594 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:57.594 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2662552 00:05:57.594 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:57.594 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:57.594 04:22:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:57.594 04:22:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2662552 00:05:57.594 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 2662552 ']' 00:05:57.594 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 2662552 00:05:57.594 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:05:57.594 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:57.594 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2662552 00:05:57.594 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:57.594 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:57.594 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2662552' 00:05:57.594 killing process with pid 2662552 00:05:57.594 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 2662552 00:05:57.594 04:22:17 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 2662552 00:05:58.159 00:05:58.159 real 0m1.045s 00:05:58.159 user 0m1.014s 00:05:58.159 sys 0m0.414s 00:05:58.159 04:22:18 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.159 04:22:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:58.159 ************************************ 00:05:58.159 END TEST dpdk_mem_utility 00:05:58.159 ************************************ 00:05:58.159 04:22:18 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:58.159 04:22:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:58.159 04:22:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.159 04:22:18 -- common/autotest_common.sh@10 -- # set +x 00:05:58.159 ************************************ 00:05:58.159 START TEST event 00:05:58.159 ************************************ 00:05:58.159 04:22:18 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:58.159 * Looking for test storage... 
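The heap/mempool/memzone summary and the element lists printed above are scripts/dpdk_mem_info.py parsing the dump that the env_dpdk_get_mem_stats RPC writes to /tmp/spdk_mem_dump.txt. A condensed sketch of the same sequence against a running target (paths relative to the SPDK checkout):

  # ask the target to dump its DPDK memory state to /tmp/spdk_mem_dump.txt
  scripts/rpc.py env_dpdk_get_mem_stats
  # summarize heaps, mempools and memzones
  scripts/dpdk_mem_info.py
  # per-element breakdown of heap 0, as the test runs with -m 0
  scripts/dpdk_mem_info.py -m 0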
00:05:58.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:58.159 04:22:18 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:58.159 04:22:18 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:58.159 04:22:18 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:58.159 04:22:18 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:58.159 04:22:18 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.159 04:22:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.159 ************************************ 00:05:58.159 START TEST event_perf 00:05:58.159 ************************************ 00:05:58.159 04:22:18 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:58.159 Running I/O for 1 seconds...[2024-07-14 04:22:18.209187] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:58.159 [2024-07-14 04:22:18.209268] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2662740 ] 00:05:58.159 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.159 [2024-07-14 04:22:18.271723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.417 [2024-07-14 04:22:18.364635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.417 [2024-07-14 04:22:18.364699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.417 [2024-07-14 04:22:18.364787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.417 [2024-07-14 04:22:18.364790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.352 Running I/O for 1 seconds... 00:05:59.352 lcore 0: 235693 00:05:59.352 lcore 1: 235691 00:05:59.352 lcore 2: 235691 00:05:59.352 lcore 3: 235691 00:05:59.352 done. 00:05:59.352 00:05:59.352 real 0m1.252s 00:05:59.352 user 0m4.164s 00:05:59.352 sys 0m0.084s 00:05:59.352 04:22:19 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.352 04:22:19 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:59.352 ************************************ 00:05:59.352 END TEST event_perf 00:05:59.352 ************************************ 00:05:59.352 04:22:19 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:59.352 04:22:19 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:59.352 04:22:19 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.352 04:22:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.352 ************************************ 00:05:59.352 START TEST event_reactor 00:05:59.352 ************************************ 00:05:59.352 04:22:19 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:59.352 [2024-07-14 04:22:19.509204] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
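The four "lcore N:" counters a few records up are the event_perf result: the number of events each reactor processed during the one-second run, with -m 0xF placing one reactor on each of cores 0-3 and -t 1 setting the duration in seconds. A sketch of rerunning the benchmark by hand with a longer window (binary path as used by the test):

  # four reactors (cores 0-3), five-second measurement instead of the test's one second
  test/event/event_perf/event_perf -m 0xF -t 5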
00:05:59.352 [2024-07-14 04:22:19.509286] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2662898 ] 00:05:59.352 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.610 [2024-07-14 04:22:19.572090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.610 [2024-07-14 04:22:19.666691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.983 test_start 00:06:00.983 oneshot 00:06:00.983 tick 100 00:06:00.983 tick 100 00:06:00.983 tick 250 00:06:00.983 tick 100 00:06:00.983 tick 100 00:06:00.983 tick 100 00:06:00.983 tick 250 00:06:00.983 tick 500 00:06:00.983 tick 100 00:06:00.983 tick 100 00:06:00.983 tick 250 00:06:00.983 tick 100 00:06:00.983 tick 100 00:06:00.983 test_end 00:06:00.983 00:06:00.983 real 0m1.252s 00:06:00.983 user 0m1.161s 00:06:00.983 sys 0m0.086s 00:06:00.983 04:22:20 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.983 04:22:20 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:00.983 ************************************ 00:06:00.983 END TEST event_reactor 00:06:00.983 ************************************ 00:06:00.983 04:22:20 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:00.983 04:22:20 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:00.983 04:22:20 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.983 04:22:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.983 ************************************ 00:06:00.983 START TEST event_reactor_perf 00:06:00.983 ************************************ 00:06:00.983 04:22:20 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:00.983 [2024-07-14 04:22:20.809906] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
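The test_start/oneshot/tick/test_end block above is the event_reactor test's own trace; by the look of it, "oneshot" marks a one-shot event and each "tick N" a periodic timer of period N firing on the single reactor (-c 0x1) during the one-second run. A minimal sketch of rerunning it by hand, assuming -t takes the duration in seconds as it does for the other event tests:

  # single reactor, two-second run; expect proportionally more tick lines
  test/event/reactor/reactor -t 2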
00:06:00.983 [2024-07-14 04:22:20.809994] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2663056 ] 00:06:00.983 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.983 [2024-07-14 04:22:20.870209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.983 [2024-07-14 04:22:20.962053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.918 test_start 00:06:01.918 test_end 00:06:01.918 Performance: 352145 events per second 00:06:01.918 00:06:01.918 real 0m1.245s 00:06:01.918 user 0m1.166s 00:06:01.918 sys 0m0.073s 00:06:01.918 04:22:22 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:01.918 04:22:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:01.918 ************************************ 00:06:01.918 END TEST event_reactor_perf 00:06:01.918 ************************************ 00:06:01.918 04:22:22 event -- event/event.sh@49 -- # uname -s 00:06:01.918 04:22:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:01.918 04:22:22 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:01.918 04:22:22 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:01.918 04:22:22 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.918 04:22:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.918 ************************************ 00:06:01.918 START TEST event_scheduler 00:06:01.918 ************************************ 00:06:01.918 04:22:22 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:02.176 * Looking for test storage... 00:06:02.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:02.177 04:22:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:02.177 04:22:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2663349 00:06:02.177 04:22:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:02.177 04:22:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.177 04:22:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2663349 00:06:02.177 04:22:22 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 2663349 ']' 00:06:02.177 04:22:22 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.177 04:22:22 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:02.177 04:22:22 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
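The "Performance: 352145 events per second" line just above is the reactor_perf result: the sustained event rate a single reactor (-c 0x1) drains over the window given with -t, presumably by submitting events back-to-back for the whole duration. A sketch of a longer hand-run measurement under that assumption:

  # one reactor, ten-second measurement window
  test/event/reactor_perf/reactor_perf -t 10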
00:06:02.177 04:22:22 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:02.177 04:22:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:02.177 [2024-07-14 04:22:22.180200] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:02.177 [2024-07-14 04:22:22.180286] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2663349 ] 00:06:02.177 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.177 [2024-07-14 04:22:22.237092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:02.177 [2024-07-14 04:22:22.328832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.177 [2024-07-14 04:22:22.328892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.177 [2024-07-14 04:22:22.328955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.177 [2024-07-14 04:22:22.328958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.436 04:22:22 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:02.436 04:22:22 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:02.436 04:22:22 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:02.436 04:22:22 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.436 04:22:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:02.436 POWER: Env isn't set yet! 00:06:02.436 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:02.436 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:02.436 POWER: Cannot get available frequencies of lcore 0 00:06:02.436 POWER: Attempting to initialise PSTAT power management... 
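Because the scheduler test app is launched with --wait-for-rpc, the framework holds off initialization until RPCs pick a scheduler and then call framework_start_init; the POWER lines around this point are the power-management side of switching to the dynamic scheduler. A rough hand-run equivalent over the default RPC socket:

  # select the dynamic scheduler, then let initialization proceed
  scripts/rpc.py framework_set_scheduler dynamic
  scripts/rpc.py framework_start_init
  # confirm which scheduler ended up active
  scripts/rpc.py framework_get_scheduler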
00:06:02.436 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:02.436 POWER: Initialized successfully for lcore 0 power management 00:06:02.436 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:02.436 POWER: Initialized successfully for lcore 1 power management 00:06:02.436 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:02.436 POWER: Initialized successfully for lcore 2 power management 00:06:02.436 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:02.436 POWER: Initialized successfully for lcore 3 power management 00:06:02.436 [2024-07-14 04:22:22.422053] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:02.436 [2024-07-14 04:22:22.422071] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:02.436 [2024-07-14 04:22:22.422082] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:02.436 04:22:22 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.436 04:22:22 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:02.436 04:22:22 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.436 04:22:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:02.436 [2024-07-14 04:22:22.522495] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:02.436 04:22:22 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.436 04:22:22 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:02.436 04:22:22 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:02.436 04:22:22 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.436 04:22:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:02.436 ************************************ 00:06:02.436 START TEST scheduler_create_thread 00:06:02.436 ************************************ 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.436 2 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.436 3 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.436 4 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.436 5 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.436 6 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.436 7 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.436 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.694 8 00:06:02.694 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.694 04:22:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:02.694 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.694 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.694 9 00:06:02.694 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.694 04:22:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:02.694 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:02.694 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.694 10 00:06:02.694 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.694 04:22:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:02.695 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.695 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.695 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.695 04:22:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:02.695 04:22:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:02.695 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.695 04:22:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.628 04:22:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.628 04:22:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:03.628 04:22:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.628 04:22:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.000 04:22:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.000 04:22:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:05.000 04:22:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:05.000 04:22:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.000 04:22:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.933 04:22:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.933 00:06:05.933 real 0m3.379s 00:06:05.933 user 0m0.007s 00:06:05.933 sys 0m0.007s 00:06:05.933 04:22:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.933 04:22:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.933 ************************************ 00:06:05.933 END TEST scheduler_create_thread 00:06:05.933 ************************************ 00:06:05.933 04:22:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:05.933 04:22:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2663349 00:06:05.933 04:22:25 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 2663349 ']' 00:06:05.933 04:22:25 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 2663349 00:06:05.933 04:22:25 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 
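The scheduler_create_thread subtest that just finished drives test-only RPCs provided by the scheduler_plugin shipped next to the test app: each scheduler_thread_create spawns an SPDK thread with a cpumask (-m) and a target busy percentage (-a), scheduler_thread_set_active retunes one thread's activity, and scheduler_thread_delete removes it. These methods are not part of stock spdk_tgt; a condensed sketch of the same calls, assuming the plugin module is on PYTHONPATH as the test arranges:

  # plugin-provided RPCs, thread ids 11 and 12 as reported earlier in the log
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12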
00:06:05.933 04:22:25 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:05.933 04:22:25 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2663349 00:06:05.933 04:22:25 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:05.933 04:22:25 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:05.933 04:22:25 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2663349' 00:06:05.933 killing process with pid 2663349 00:06:05.933 04:22:25 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 2663349 00:06:05.933 04:22:25 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 2663349 00:06:06.191 [2024-07-14 04:22:26.307448] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:06.449 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:06.449 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:06.449 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:06.449 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:06.449 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:06.449 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:06.449 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:06.449 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:06.449 00:06:06.449 real 0m4.490s 00:06:06.449 user 0m8.013s 00:06:06.449 sys 0m0.317s 00:06:06.449 04:22:26 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.449 04:22:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.449 ************************************ 00:06:06.449 END TEST event_scheduler 00:06:06.449 ************************************ 00:06:06.449 04:22:26 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:06.449 04:22:26 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:06.449 04:22:26 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.449 04:22:26 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.449 04:22:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.449 ************************************ 00:06:06.449 START TEST app_repeat 00:06:06.449 ************************************ 00:06:06.449 04:22:26 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:06.449 04:22:26 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.449 04:22:26 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.449 04:22:26 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:06.449 04:22:26 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.449 04:22:26 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:06.449 04:22:26 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:06.449 04:22:26 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:06.708 04:22:26 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2663879 00:06:06.708 04:22:26 
event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:06.708 04:22:26 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.708 04:22:26 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2663879' 00:06:06.708 Process app_repeat pid: 2663879 00:06:06.708 04:22:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:06.708 04:22:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:06.708 spdk_app_start Round 0 00:06:06.708 04:22:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2663879 /var/tmp/spdk-nbd.sock 00:06:06.708 04:22:26 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2663879 ']' 00:06:06.708 04:22:26 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.708 04:22:26 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:06.708 04:22:26 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:06.708 04:22:26 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:06.708 04:22:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.708 [2024-07-14 04:22:26.659918] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:06.708 [2024-07-14 04:22:26.659982] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2663879 ] 00:06:06.708 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.708 [2024-07-14 04:22:26.722533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.708 [2024-07-14 04:22:26.815123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.708 [2024-07-14 04:22:26.815128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.967 04:22:26 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:06.967 04:22:26 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:06.967 04:22:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.225 Malloc0 00:06:07.225 04:22:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.483 Malloc1 00:06:07.483 04:22:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.483 04:22:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.483 04:22:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.483 04:22:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:07.483 04:22:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.483 04:22:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:07.483 04:22:27 event.app_repeat 
-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.483 04:22:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.483 04:22:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.483 04:22:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:07.483 04:22:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.483 04:22:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:07.483 04:22:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:07.483 04:22:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:07.483 04:22:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.483 04:22:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.741 /dev/nbd0 00:06:07.741 04:22:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.741 04:22:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.741 04:22:27 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:07.741 04:22:27 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:07.741 04:22:27 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:07.741 04:22:27 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:07.741 04:22:27 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:07.741 04:22:27 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:07.741 04:22:27 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:07.741 04:22:27 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:07.741 04:22:27 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.741 1+0 records in 00:06:07.741 1+0 records out 00:06:07.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182233 s, 22.5 MB/s 00:06:07.741 04:22:27 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.741 04:22:27 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:07.741 04:22:27 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.741 04:22:27 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:07.741 04:22:27 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:07.741 04:22:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.741 04:22:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.741 04:22:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.999 /dev/nbd1 00:06:07.999 04:22:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.999 04:22:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.999 04:22:27 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:07.999 04:22:27 event.app_repeat -- 
common/autotest_common.sh@865 -- # local i 00:06:07.999 04:22:27 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:07.999 04:22:27 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:07.999 04:22:27 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:07.999 04:22:27 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:07.999 04:22:27 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:07.999 04:22:27 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:07.999 04:22:27 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.999 1+0 records in 00:06:07.999 1+0 records out 00:06:07.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241524 s, 17.0 MB/s 00:06:07.999 04:22:27 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.999 04:22:27 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:07.999 04:22:27 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.999 04:22:27 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:07.999 04:22:27 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:07.999 04:22:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.999 04:22:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.999 04:22:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.999 04:22:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.999 04:22:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:08.257 { 00:06:08.257 "nbd_device": "/dev/nbd0", 00:06:08.257 "bdev_name": "Malloc0" 00:06:08.257 }, 00:06:08.257 { 00:06:08.257 "nbd_device": "/dev/nbd1", 00:06:08.257 "bdev_name": "Malloc1" 00:06:08.257 } 00:06:08.257 ]' 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.257 { 00:06:08.257 "nbd_device": "/dev/nbd0", 00:06:08.257 "bdev_name": "Malloc0" 00:06:08.257 }, 00:06:08.257 { 00:06:08.257 "nbd_device": "/dev/nbd1", 00:06:08.257 "bdev_name": "Malloc1" 00:06:08.257 } 00:06:08.257 ]' 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:08.257 /dev/nbd1' 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:08.257 /dev/nbd1' 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:08.257 04:22:28 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:08.257 256+0 records in 00:06:08.257 256+0 records out 00:06:08.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497484 s, 211 MB/s 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:08.257 256+0 records in 00:06:08.257 256+0 records out 00:06:08.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020618 s, 50.9 MB/s 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:08.257 256+0 records in 00:06:08.257 256+0 records out 00:06:08.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238342 s, 44.0 MB/s 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.257 04:22:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:08.515 04:22:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:08.515 04:22:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:08.515 04:22:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.515 04:22:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.515 04:22:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.515 04:22:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.515 04:22:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.515 04:22:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.515 04:22:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.515 04:22:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.772 04:22:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.772 04:22:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.772 04:22:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.772 04:22:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.772 04:22:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.772 04:22:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.772 04:22:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.772 04:22:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.772 04:22:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.772 04:22:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.772 04:22:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.030 04:22:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.030 04:22:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.030 04:22:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.030 04:22:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.030 04:22:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.030 04:22:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.030 04:22:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:09.030 04:22:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.030 04:22:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.030 04:22:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:09.030 04:22:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:09.030 04:22:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:09.030 04:22:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.288 04:22:29 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:06:09.545 [2024-07-14 04:22:29.681250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.802 [2024-07-14 04:22:29.772081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.802 [2024-07-14 04:22:29.772081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.802 [2024-07-14 04:22:29.833437] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.802 [2024-07-14 04:22:29.833517] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:12.323 04:22:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:12.323 04:22:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:12.323 spdk_app_start Round 1 00:06:12.323 04:22:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2663879 /var/tmp/spdk-nbd.sock 00:06:12.323 04:22:32 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2663879 ']' 00:06:12.323 04:22:32 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.323 04:22:32 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:12.323 04:22:32 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:12.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:12.323 04:22:32 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:12.323 04:22:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.580 04:22:32 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:12.580 04:22:32 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:12.580 04:22:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.843 Malloc0 00:06:12.843 04:22:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.101 Malloc1 00:06:13.101 04:22:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.101 04:22:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.101 04:22:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.101 04:22:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.101 04:22:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.101 04:22:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.101 04:22:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.101 04:22:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.101 04:22:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.101 04:22:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.101 04:22:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.101 04:22:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:06:13.101 04:22:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:13.101 04:22:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.101 04:22:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.101 04:22:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.358 /dev/nbd0 00:06:13.358 04:22:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.358 04:22:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.358 04:22:33 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:13.358 04:22:33 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:13.358 04:22:33 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:13.358 04:22:33 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:13.358 04:22:33 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:13.358 04:22:33 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:13.358 04:22:33 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:13.358 04:22:33 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:13.358 04:22:33 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.358 1+0 records in 00:06:13.358 1+0 records out 00:06:13.358 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163008 s, 25.1 MB/s 00:06:13.358 04:22:33 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.358 04:22:33 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:13.358 04:22:33 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.358 04:22:33 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:13.358 04:22:33 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:13.358 04:22:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.358 04:22:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.358 04:22:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.615 /dev/nbd1 00:06:13.615 04:22:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.615 04:22:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.615 04:22:33 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:13.615 04:22:33 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:13.615 04:22:33 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:13.615 04:22:33 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:13.615 04:22:33 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:13.615 04:22:33 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:13.615 04:22:33 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:13.615 04:22:33 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 
00:06:13.615 04:22:33 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.615 1+0 records in 00:06:13.615 1+0 records out 00:06:13.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181446 s, 22.6 MB/s 00:06:13.615 04:22:33 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.615 04:22:33 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:13.615 04:22:33 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.616 04:22:33 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:13.616 04:22:33 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:13.616 04:22:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.616 04:22:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.616 04:22:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.616 04:22:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.616 04:22:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.874 04:22:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.874 { 00:06:13.874 "nbd_device": "/dev/nbd0", 00:06:13.874 "bdev_name": "Malloc0" 00:06:13.874 }, 00:06:13.874 { 00:06:13.874 "nbd_device": "/dev/nbd1", 00:06:13.874 "bdev_name": "Malloc1" 00:06:13.874 } 00:06:13.874 ]' 00:06:13.874 04:22:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.874 { 00:06:13.874 "nbd_device": "/dev/nbd0", 00:06:13.874 "bdev_name": "Malloc0" 00:06:13.874 }, 00:06:13.874 { 00:06:13.874 "nbd_device": "/dev/nbd1", 00:06:13.874 "bdev_name": "Malloc1" 00:06:13.874 } 00:06:13.874 ]' 00:06:13.874 04:22:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.874 04:22:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.874 /dev/nbd1' 00:06:13.874 04:22:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.874 /dev/nbd1' 00:06:13.874 04:22:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.874 04:22:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.874 04:22:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.874 04:22:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.874 04:22:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.874 04:22:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.874 04:22:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.874 04:22:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.874 04:22:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.874 04:22:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.874 04:22:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.874 04:22:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.132 256+0 records in 00:06:14.132 256+0 records out 00:06:14.132 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497282 s, 211 MB/s 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.132 256+0 records in 00:06:14.132 256+0 records out 00:06:14.132 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02129 s, 49.3 MB/s 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.132 256+0 records in 00:06:14.132 256+0 records out 00:06:14.132 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242493 s, 43.2 MB/s 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.132 04:22:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.389 04:22:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.390 04:22:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.390 04:22:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.390 04:22:34 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.390 04:22:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.390 04:22:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.390 04:22:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.390 04:22:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.390 04:22:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.390 04:22:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.647 04:22:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.647 04:22:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.647 04:22:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.647 04:22:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.647 04:22:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.647 04:22:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.647 04:22:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.647 04:22:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.647 04:22:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.647 04:22:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.648 04:22:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.906 04:22:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.906 04:22:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.906 04:22:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.906 04:22:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.906 04:22:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.906 04:22:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.906 04:22:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:14.906 04:22:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.906 04:22:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.906 04:22:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.906 04:22:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.906 04:22:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.906 04:22:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.165 04:22:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:15.424 [2024-07-14 04:22:35.456621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.424 [2024-07-14 04:22:35.546079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.424 [2024-07-14 04:22:35.546086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.424 [2024-07-14 04:22:35.605569] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
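The nbd_start_disk/waitfornbd sequence traced in this round (poll /proc/partitions up to 20 times, then issue a single 4 KiB O_DIRECT read and check the copied size) reduces to roughly the bash sketch below; the temp-file path and the retry delay are illustrative assumptions, not the test's exact values:

    wait_for_nbd() {
        local nbd_name=$1 i size tmp=/tmp/nbdtest   # tmp path is illustrative
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                               # retry delay assumed, not shown in the trace
        done
        # a one-block O_DIRECT read confirms the device actually serves I/O
        dd if=/dev/$nbd_name of=$tmp bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s $tmp)
        rm -f $tmp
        [ "$size" != 0 ]
    }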
00:06:15.424 [2024-07-14 04:22:35.605653] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:18.705 04:22:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:18.705 04:22:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:18.705 spdk_app_start Round 2 00:06:18.705 04:22:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2663879 /var/tmp/spdk-nbd.sock 00:06:18.705 04:22:38 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2663879 ']' 00:06:18.705 04:22:38 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.705 04:22:38 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:18.705 04:22:38 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:18.705 04:22:38 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:18.705 04:22:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.705 04:22:38 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:18.705 04:22:38 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:18.705 04:22:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.705 Malloc0 00:06:18.706 04:22:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.963 Malloc1 00:06:18.963 04:22:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.963 04:22:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.963 04:22:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.963 04:22:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.963 04:22:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.963 04:22:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.963 04:22:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.963 04:22:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.963 04:22:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.963 04:22:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.963 04:22:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.963 04:22:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.963 04:22:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:18.963 04:22:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.963 04:22:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.963 04:22:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.227 /dev/nbd0 00:06:19.227 
04:22:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.227 04:22:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.227 04:22:39 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:19.227 04:22:39 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:19.227 04:22:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:19.227 04:22:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:19.227 04:22:39 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:19.227 04:22:39 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:19.227 04:22:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:19.227 04:22:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:19.227 04:22:39 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.227 1+0 records in 00:06:19.227 1+0 records out 00:06:19.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192797 s, 21.2 MB/s 00:06:19.227 04:22:39 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.227 04:22:39 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:19.227 04:22:39 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.227 04:22:39 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:19.227 04:22:39 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:19.227 04:22:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.227 04:22:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.227 04:22:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.489 /dev/nbd1 00:06:19.489 04:22:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.489 04:22:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.489 04:22:39 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:19.489 04:22:39 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:19.489 04:22:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:19.489 04:22:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:19.489 04:22:39 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:19.489 04:22:39 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:19.489 04:22:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:19.489 04:22:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:19.489 04:22:39 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.489 1+0 records in 00:06:19.489 1+0 records out 00:06:19.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201342 s, 20.3 MB/s 00:06:19.489 04:22:39 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.489 04:22:39 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:19.489 04:22:39 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.489 04:22:39 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:19.489 04:22:39 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:19.489 04:22:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.489 04:22:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.489 04:22:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.489 04:22:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.489 04:22:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.746 04:22:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.746 { 00:06:19.746 "nbd_device": "/dev/nbd0", 00:06:19.746 "bdev_name": "Malloc0" 00:06:19.746 }, 00:06:19.746 { 00:06:19.746 "nbd_device": "/dev/nbd1", 00:06:19.746 "bdev_name": "Malloc1" 00:06:19.746 } 00:06:19.746 ]' 00:06:19.746 04:22:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.746 { 00:06:19.746 "nbd_device": "/dev/nbd0", 00:06:19.746 "bdev_name": "Malloc0" 00:06:19.746 }, 00:06:19.746 { 00:06:19.747 "nbd_device": "/dev/nbd1", 00:06:19.747 "bdev_name": "Malloc1" 00:06:19.747 } 00:06:19.747 ]' 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.747 /dev/nbd1' 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.747 /dev/nbd1' 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.747 256+0 records in 00:06:19.747 256+0 records out 00:06:19.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509823 s, 206 MB/s 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.747 256+0 records in 00:06:19.747 256+0 records out 00:06:19.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210216 s, 49.9 MB/s 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.747 256+0 records in 00:06:19.747 256+0 records out 00:06:19.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239106 s, 43.9 MB/s 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.747 04:22:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.006 04:22:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.006 04:22:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.006 04:22:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.006 04:22:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.006 04:22:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.006 04:22:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.006 04:22:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.006 04:22:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
00:06:20.006 04:22:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.006 04:22:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.264 04:22:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.264 04:22:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.264 04:22:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.264 04:22:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.264 04:22:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.264 04:22:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.522 04:22:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.522 04:22:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.522 04:22:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.522 04:22:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.522 04:22:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.522 04:22:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.522 04:22:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.522 04:22:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.779 04:22:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.779 04:22:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.779 04:22:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.779 04:22:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:20.779 04:22:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.779 04:22:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.779 04:22:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.779 04:22:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.779 04:22:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.779 04:22:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.037 04:22:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:21.295 [2024-07-14 04:22:41.252729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.295 [2024-07-14 04:22:41.344528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.295 [2024-07-14 04:22:41.344535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.295 [2024-07-14 04:22:41.400206] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.295 [2024-07-14 04:22:41.400285] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
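For reference, the nbd_dd_data_verify cycle each app_repeat round runs above amounts to the sketch below: seed 1 MiB of random data, push it through every NBD device with O_DIRECT writes, then compare each device back against the seed file. The paths are illustrative; the block sizes and counts match the traced dd/cmp arguments:

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp=/tmp/nbdrandtest                                   # illustrative path
    dd if=/dev/urandom of=$tmp bs=4096 count=256           # 1 MiB seed data
    for dev in "${nbd_list[@]}"; do
        dd if=$tmp of=$dev bs=4096 count=256 oflag=direct  # write phase
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M $tmp $dev                             # verify phase: must read back byte-identical
    done
    rm $tmp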
00:06:24.575 04:22:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2663879 /var/tmp/spdk-nbd.sock 00:06:24.575 04:22:44 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2663879 ']' 00:06:24.575 04:22:44 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.575 04:22:44 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:24.575 04:22:44 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.575 04:22:44 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:24.575 04:22:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.575 04:22:44 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:24.575 04:22:44 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:24.575 04:22:44 event.app_repeat -- event/event.sh@39 -- # killprocess 2663879 00:06:24.575 04:22:44 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 2663879 ']' 00:06:24.575 04:22:44 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 2663879 00:06:24.575 04:22:44 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:24.575 04:22:44 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:24.575 04:22:44 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2663879 00:06:24.575 04:22:44 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:24.575 04:22:44 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:24.575 04:22:44 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2663879' 00:06:24.575 killing process with pid 2663879 00:06:24.575 04:22:44 event.app_repeat -- common/autotest_common.sh@965 -- # kill 2663879 00:06:24.575 04:22:44 event.app_repeat -- common/autotest_common.sh@970 -- # wait 2663879 00:06:24.575 spdk_app_start is called in Round 0. 00:06:24.575 Shutdown signal received, stop current app iteration 00:06:24.575 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:24.575 spdk_app_start is called in Round 1. 00:06:24.575 Shutdown signal received, stop current app iteration 00:06:24.575 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:24.575 spdk_app_start is called in Round 2. 00:06:24.575 Shutdown signal received, stop current app iteration 00:06:24.575 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:24.575 spdk_app_start is called in Round 3. 
00:06:24.575 Shutdown signal received, stop current app iteration 00:06:24.575 04:22:44 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:24.575 04:22:44 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:24.575 00:06:24.575 real 0m17.860s 00:06:24.575 user 0m38.865s 00:06:24.575 sys 0m3.202s 00:06:24.575 04:22:44 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.575 04:22:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.575 ************************************ 00:06:24.575 END TEST app_repeat 00:06:24.575 ************************************ 00:06:24.575 04:22:44 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:24.575 04:22:44 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:24.575 04:22:44 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:24.575 04:22:44 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.575 04:22:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.575 ************************************ 00:06:24.575 START TEST cpu_locks 00:06:24.575 ************************************ 00:06:24.575 04:22:44 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:24.575 * Looking for test storage... 00:06:24.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:24.575 04:22:44 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:24.575 04:22:44 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:24.575 04:22:44 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:24.575 04:22:44 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:24.575 04:22:44 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:24.575 04:22:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.575 04:22:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.575 ************************************ 00:06:24.575 START TEST default_locks 00:06:24.575 ************************************ 00:06:24.575 04:22:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:24.575 04:22:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2666182 00:06:24.575 04:22:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.575 04:22:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2666182 00:06:24.575 04:22:44 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 2666182 ']' 00:06:24.575 04:22:44 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.575 04:22:44 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:24.575 04:22:44 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
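The killprocess helper traced at the end of app_repeat, and reused throughout the cpu_locks tests that follow, boils down to this simplified sketch; the sudo guard and the final wait mirror the traced commands, the rest is pared down:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                          # target must still be alive
        # refuse to signal privileged helpers such as sudo
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                         # reap it; it is a child of the test shell
    }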
00:06:24.575 04:22:44 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:24.575 04:22:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.575 [2024-07-14 04:22:44.667832] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:24.575 [2024-07-14 04:22:44.667955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666182 ] 00:06:24.575 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.575 [2024-07-14 04:22:44.728528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.834 [2024-07-14 04:22:44.814541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.092 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:25.092 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:25.092 04:22:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2666182 00:06:25.092 04:22:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2666182 00:06:25.092 04:22:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.351 lslocks: write error 00:06:25.351 04:22:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2666182 00:06:25.351 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 2666182 ']' 00:06:25.351 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 2666182 00:06:25.351 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:25.351 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:25.351 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2666182 00:06:25.351 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:25.351 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:25.351 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2666182' 00:06:25.351 killing process with pid 2666182 00:06:25.351 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 2666182 00:06:25.351 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 2666182 00:06:25.918 04:22:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2666182 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2666182 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@651 
-- # waitforlisten 2666182 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 2666182 ']' 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2666182) - No such process 00:06:25.919 ERROR: process (pid: 2666182) is no longer running 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:25.919 00:06:25.919 real 0m1.199s 00:06:25.919 user 0m1.144s 00:06:25.919 sys 0m0.520s 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.919 04:22:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.919 ************************************ 00:06:25.919 END TEST default_locks 00:06:25.919 ************************************ 00:06:25.919 04:22:45 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:25.919 04:22:45 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:25.919 04:22:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.919 04:22:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.919 ************************************ 00:06:25.919 START TEST default_locks_via_rpc 00:06:25.919 ************************************ 00:06:25.919 04:22:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:25.919 04:22:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2666446 00:06:25.919 04:22:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.919 04:22:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2666446 00:06:25.919 04:22:45 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2666446 ']' 00:06:25.919 04:22:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.919 04:22:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.919 04:22:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.919 04:22:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.919 04:22:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.919 [2024-07-14 04:22:45.914779] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:25.919 [2024-07-14 04:22:45.914886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666446 ] 00:06:25.919 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.919 [2024-07-14 04:22:45.971530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.919 [2024-07-14 04:22:46.060285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.178 04:22:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.178 04:22:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:26.178 04:22:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:26.178 04:22:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.178 04:22:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.178 04:22:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.178 04:22:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:26.178 04:22:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:26.178 04:22:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:26.178 04:22:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:26.178 04:22:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:26.178 04:22:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.178 04:22:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.178 04:22:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.178 04:22:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2666446 00:06:26.178 04:22:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2666446 00:06:26.178 04:22:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.745 04:22:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2666446 00:06:26.745 04:22:46 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 2666446 ']' 00:06:26.745 04:22:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 2666446 00:06:26.745 04:22:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:26.745 04:22:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:26.745 04:22:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2666446 00:06:26.745 04:22:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:26.745 04:22:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:26.745 04:22:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2666446' 00:06:26.745 killing process with pid 2666446 00:06:26.745 04:22:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 2666446 00:06:26.745 04:22:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 2666446 00:06:27.004 00:06:27.004 real 0m1.230s 00:06:27.004 user 0m1.145s 00:06:27.004 sys 0m0.555s 00:06:27.004 04:22:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.004 04:22:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.004 ************************************ 00:06:27.004 END TEST default_locks_via_rpc 00:06:27.004 ************************************ 00:06:27.004 04:22:47 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:27.004 04:22:47 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:27.004 04:22:47 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:27.004 04:22:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.004 ************************************ 00:06:27.004 START TEST non_locking_app_on_locked_coremask 00:06:27.004 ************************************ 00:06:27.004 04:22:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:27.004 04:22:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2666615 00:06:27.004 04:22:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.004 04:22:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2666615 /var/tmp/spdk.sock 00:06:27.004 04:22:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2666615 ']' 00:06:27.004 04:22:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.004 04:22:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:27.004 04:22:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
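The locks_exist check exercised by default_locks and default_locks_via_rpc above is essentially a one-liner, sketched here from the traced lslocks/grep pipeline (the lock name spdk_cpu_lock is taken from the trace):

    locks_exist() {
        # does the target pid hold a lock on a file named spdk_cpu_lock?
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

default_locks expects this to succeed for a target started with -m 0x1, while default_locks_via_rpc first disables the cpumask locks over RPC (framework_disable_cpumask_locks), re-enables them (framework_enable_cpumask_locks), and then expects the same check to pass, as traced above.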
00:06:27.004 04:22:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:27.004 04:22:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.004 [2024-07-14 04:22:47.191529] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:27.004 [2024-07-14 04:22:47.191627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666615 ] 00:06:27.266 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.266 [2024-07-14 04:22:47.250941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.266 [2024-07-14 04:22:47.339030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.524 04:22:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:27.524 04:22:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:27.524 04:22:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2666619 00:06:27.524 04:22:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:27.524 04:22:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2666619 /var/tmp/spdk2.sock 00:06:27.524 04:22:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2666619 ']' 00:06:27.524 04:22:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.524 04:22:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:27.524 04:22:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.524 04:22:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:27.524 04:22:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.524 [2024-07-14 04:22:47.644723] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:27.524 [2024-07-14 04:22:47.644795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666619 ] 00:06:27.524 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.782 [2024-07-14 04:22:47.735187] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:27.782 [2024-07-14 04:22:47.735221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.782 [2024-07-14 04:22:47.919488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.717 04:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.717 04:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:28.717 04:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2666615 00:06:28.717 04:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2666615 00:06:28.717 04:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.975 lslocks: write error 00:06:28.975 04:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2666615 00:06:28.976 04:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2666615 ']' 00:06:28.976 04:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2666615 00:06:28.976 04:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:28.976 04:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:28.976 04:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2666615 00:06:28.976 04:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:28.976 04:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:28.976 04:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2666615' 00:06:28.976 killing process with pid 2666615 00:06:28.976 04:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2666615 00:06:28.976 04:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2666615 00:06:29.911 04:22:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2666619 00:06:29.911 04:22:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2666619 ']' 00:06:29.911 04:22:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2666619 00:06:29.911 04:22:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:29.911 04:22:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:29.911 04:22:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2666619 00:06:29.911 04:22:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:29.911 04:22:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:29.911 04:22:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2666619' 00:06:29.911 
killing process with pid 2666619 00:06:29.911 04:22:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2666619 00:06:29.911 04:22:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2666619 00:06:30.170 00:06:30.170 real 0m3.114s 00:06:30.170 user 0m3.230s 00:06:30.170 sys 0m1.031s 00:06:30.170 04:22:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.170 04:22:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.170 ************************************ 00:06:30.170 END TEST non_locking_app_on_locked_coremask 00:06:30.170 ************************************ 00:06:30.170 04:22:50 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:30.170 04:22:50 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:30.170 04:22:50 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.170 04:22:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.170 ************************************ 00:06:30.170 START TEST locking_app_on_unlocked_coremask 00:06:30.170 ************************************ 00:06:30.170 04:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:30.170 04:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2667002 00:06:30.170 04:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:30.170 04:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2667002 /var/tmp/spdk.sock 00:06:30.170 04:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2667002 ']' 00:06:30.170 04:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.170 04:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.170 04:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.170 04:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.170 04:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.170 [2024-07-14 04:22:50.348442] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:30.170 [2024-07-14 04:22:50.348531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667002 ] 00:06:30.429 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.429 [2024-07-14 04:22:50.410018] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:30.429 [2024-07-14 04:22:50.410059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.429 [2024-07-14 04:22:50.505429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.687 04:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:30.687 04:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:30.687 04:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2667053 00:06:30.687 04:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:30.687 04:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2667053 /var/tmp/spdk2.sock 00:06:30.687 04:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2667053 ']' 00:06:30.687 04:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.687 04:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.687 04:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.687 04:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.687 04:22:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.687 [2024-07-14 04:22:50.816074] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:30.687 [2024-07-14 04:22:50.816171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667053 ] 00:06:30.687 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.945 [2024-07-14 04:22:50.913947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.945 [2024-07-14 04:22:51.093793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.880 04:22:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:31.880 04:22:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:31.880 04:22:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2667053 00:06:31.880 04:22:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2667053 00:06:31.880 04:22:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.139 lslocks: write error 00:06:32.139 04:22:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2667002 00:06:32.139 04:22:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2667002 ']' 00:06:32.139 04:22:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 2667002 00:06:32.139 04:22:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:32.139 04:22:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:32.139 04:22:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2667002 00:06:32.139 04:22:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:32.139 04:22:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:32.140 04:22:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2667002' 00:06:32.140 killing process with pid 2667002 00:06:32.140 04:22:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 2667002 00:06:32.140 04:22:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 2667002 00:06:33.075 04:22:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2667053 00:06:33.075 04:22:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2667053 ']' 00:06:33.075 04:22:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 2667053 00:06:33.075 04:22:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:33.075 04:22:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:33.075 04:22:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2667053 00:06:33.075 04:22:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:06:33.075 04:22:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:33.075 04:22:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2667053' 00:06:33.075 killing process with pid 2667053 00:06:33.075 04:22:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 2667053 00:06:33.075 04:22:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 2667053 00:06:33.332 00:06:33.332 real 0m3.129s 00:06:33.332 user 0m3.284s 00:06:33.332 sys 0m1.016s 00:06:33.332 04:22:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.332 04:22:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.332 ************************************ 00:06:33.332 END TEST locking_app_on_unlocked_coremask 00:06:33.332 ************************************ 00:06:33.332 04:22:53 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:33.332 04:22:53 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:33.332 04:22:53 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.332 04:22:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.332 ************************************ 00:06:33.332 START TEST locking_app_on_locked_coremask 00:06:33.332 ************************************ 00:06:33.332 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:33.332 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2667361 00:06:33.332 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.332 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2667361 /var/tmp/spdk.sock 00:06:33.332 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2667361 ']' 00:06:33.332 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.332 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:33.332 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.332 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:33.332 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.589 [2024-07-14 04:22:53.528970] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:33.589 [2024-07-14 04:22:53.529060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667361 ] 00:06:33.589 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.589 [2024-07-14 04:22:53.595527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.589 [2024-07-14 04:22:53.691292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.847 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.847 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:33.847 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2667487 00:06:33.847 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2667487 /var/tmp/spdk2.sock 00:06:33.847 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:33.847 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:33.847 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2667487 /var/tmp/spdk2.sock 00:06:33.847 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:33.847 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.847 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:33.847 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.847 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2667487 /var/tmp/spdk2.sock 00:06:33.847 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2667487 ']' 00:06:33.847 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.847 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:33.847 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.847 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:33.847 04:22:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.847 [2024-07-14 04:22:53.993772] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:33.847 [2024-07-14 04:22:53.993879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667487 ] 00:06:33.847 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.104 [2024-07-14 04:22:54.091275] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2667361 has claimed it. 00:06:34.104 [2024-07-14 04:22:54.091349] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:34.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2667487) - No such process 00:06:34.671 ERROR: process (pid: 2667487) is no longer running 00:06:34.671 04:22:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:34.671 04:22:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:34.671 04:22:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:34.671 04:22:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.671 04:22:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:34.671 04:22:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.671 04:22:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2667361 00:06:34.671 04:22:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2667361 00:06:34.671 04:22:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.929 lslocks: write error 00:06:34.929 04:22:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2667361 00:06:34.929 04:22:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2667361 ']' 00:06:34.929 04:22:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2667361 00:06:34.929 04:22:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:34.929 04:22:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:34.929 04:22:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2667361 00:06:34.929 04:22:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:34.929 04:22:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:34.929 04:22:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2667361' 00:06:34.929 killing process with pid 2667361 00:06:34.929 04:22:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2667361 00:06:34.929 04:22:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2667361 00:06:35.496 00:06:35.496 real 0m1.911s 00:06:35.496 user 0m2.110s 00:06:35.496 sys 0m0.599s 00:06:35.496 04:22:55 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.496 04:22:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.496 ************************************ 00:06:35.496 END TEST locking_app_on_locked_coremask 00:06:35.496 ************************************ 00:06:35.496 04:22:55 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:35.496 04:22:55 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:35.496 04:22:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.496 04:22:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.496 ************************************ 00:06:35.496 START TEST locking_overlapped_coremask 00:06:35.496 ************************************ 00:06:35.496 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:35.496 04:22:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2667655 00:06:35.496 04:22:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:35.496 04:22:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2667655 /var/tmp/spdk.sock 00:06:35.496 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 2667655 ']' 00:06:35.496 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.496 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:35.496 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.496 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:35.496 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.496 [2024-07-14 04:22:55.496921] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:35.496 [2024-07-14 04:22:55.497015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667655 ] 00:06:35.496 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.496 [2024-07-14 04:22:55.560511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.496 [2024-07-14 04:22:55.650393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.496 [2024-07-14 04:22:55.650448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.496 [2024-07-14 04:22:55.650466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.755 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:35.755 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:35.755 04:22:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2667700 00:06:35.755 04:22:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:35.755 04:22:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2667700 /var/tmp/spdk2.sock 00:06:35.755 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:35.755 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2667700 /var/tmp/spdk2.sock 00:06:35.755 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:35.755 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.755 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:35.755 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.755 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2667700 /var/tmp/spdk2.sock 00:06:35.755 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 2667700 ']' 00:06:35.755 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.755 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:35.755 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.755 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:35.755 04:22:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.755 [2024-07-14 04:22:55.945592] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:35.755 [2024-07-14 04:22:55.945694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667700 ] 00:06:36.012 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.013 [2024-07-14 04:22:56.037675] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2667655 has claimed it. 00:06:36.013 [2024-07-14 04:22:56.037740] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:36.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2667700) - No such process 00:06:36.579 ERROR: process (pid: 2667700) is no longer running 00:06:36.579 04:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:36.579 04:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:36.579 04:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:36.579 04:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:36.579 04:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:36.579 04:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:36.579 04:22:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:36.579 04:22:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:36.579 04:22:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:36.579 04:22:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:36.579 04:22:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2667655 00:06:36.579 04:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 2667655 ']' 00:06:36.579 04:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 2667655 00:06:36.579 04:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:36.579 04:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:36.579 04:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2667655 00:06:36.579 04:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:36.579 04:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:36.579 04:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2667655' 00:06:36.579 killing process with pid 2667655 00:06:36.579 04:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
2667655 00:06:36.579 04:22:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 2667655 00:06:37.145 00:06:37.145 real 0m1.635s 00:06:37.145 user 0m4.424s 00:06:37.145 sys 0m0.448s 00:06:37.146 04:22:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.146 04:22:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.146 ************************************ 00:06:37.146 END TEST locking_overlapped_coremask 00:06:37.146 ************************************ 00:06:37.146 04:22:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:37.146 04:22:57 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:37.146 04:22:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.146 04:22:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.146 ************************************ 00:06:37.146 START TEST locking_overlapped_coremask_via_rpc 00:06:37.146 ************************************ 00:06:37.146 04:22:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:37.146 04:22:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2667954 00:06:37.146 04:22:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:37.146 04:22:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2667954 /var/tmp/spdk.sock 00:06:37.146 04:22:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2667954 ']' 00:06:37.146 04:22:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.146 04:22:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.146 04:22:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.146 04:22:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.146 04:22:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.146 [2024-07-14 04:22:57.168049] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:37.146 [2024-07-14 04:22:57.168127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667954 ] 00:06:37.146 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.146 [2024-07-14 04:22:57.224601] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:37.146 [2024-07-14 04:22:57.224641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.146 [2024-07-14 04:22:57.314523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.146 [2024-07-14 04:22:57.314588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.146 [2024-07-14 04:22:57.314591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.404 04:22:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:37.404 04:22:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:37.404 04:22:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2667960 00:06:37.404 04:22:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2667960 /var/tmp/spdk2.sock 00:06:37.404 04:22:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:37.404 04:22:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2667960 ']' 00:06:37.404 04:22:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.404 04:22:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.404 04:22:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.404 04:22:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.404 04:22:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.662 [2024-07-14 04:22:57.618308] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:37.662 [2024-07-14 04:22:57.618405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667960 ] 00:06:37.662 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.662 [2024-07-14 04:22:57.705155] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:37.662 [2024-07-14 04:22:57.705203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.920 [2024-07-14 04:22:57.881556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.920 [2024-07-14 04:22:57.884922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:37.920 [2024-07-14 04:22:57.884924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.485 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:38.485 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:38.485 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.486 [2024-07-14 04:22:58.544960] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2667954 has claimed it. 
00:06:38.486 request: 00:06:38.486 { 00:06:38.486 "method": "framework_enable_cpumask_locks", 00:06:38.486 "req_id": 1 00:06:38.486 } 00:06:38.486 Got JSON-RPC error response 00:06:38.486 response: 00:06:38.486 { 00:06:38.486 "code": -32603, 00:06:38.486 "message": "Failed to claim CPU core: 2" 00:06:38.486 } 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2667954 /var/tmp/spdk.sock 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2667954 ']' 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.486 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.743 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:38.743 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:38.743 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2667960 /var/tmp/spdk2.sock 00:06:38.743 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2667960 ']' 00:06:38.743 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.743 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.743 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:38.743 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.743 04:22:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.002 04:22:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.002 04:22:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:39.002 04:22:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:39.002 04:22:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:39.002 04:22:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:39.002 04:22:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:39.002 00:06:39.002 real 0m1.908s 00:06:39.002 user 0m1.011s 00:06:39.002 sys 0m0.141s 00:06:39.002 04:22:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.002 04:22:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.002 ************************************ 00:06:39.002 END TEST locking_overlapped_coremask_via_rpc 00:06:39.002 ************************************ 00:06:39.002 04:22:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:39.002 04:22:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2667954 ]] 00:06:39.002 04:22:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2667954 00:06:39.002 04:22:59 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2667954 ']' 00:06:39.002 04:22:59 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2667954 00:06:39.002 04:22:59 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:39.002 04:22:59 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:39.002 04:22:59 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2667954 00:06:39.002 04:22:59 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:39.002 04:22:59 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:39.002 04:22:59 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2667954' 00:06:39.002 killing process with pid 2667954 00:06:39.002 04:22:59 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 2667954 00:06:39.002 04:22:59 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 2667954 00:06:39.568 04:22:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2667960 ]] 00:06:39.568 04:22:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2667960 00:06:39.568 04:22:59 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2667960 ']' 00:06:39.568 04:22:59 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2667960 00:06:39.568 04:22:59 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:39.568 04:22:59 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:06:39.568 04:22:59 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2667960 00:06:39.568 04:22:59 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:39.568 04:22:59 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:39.568 04:22:59 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2667960' 00:06:39.568 killing process with pid 2667960 00:06:39.568 04:22:59 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 2667960 00:06:39.568 04:22:59 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 2667960 00:06:39.825 04:22:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.825 04:22:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:39.825 04:22:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2667954 ]] 00:06:39.825 04:22:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2667954 00:06:39.825 04:22:59 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2667954 ']' 00:06:39.825 04:22:59 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2667954 00:06:39.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2667954) - No such process 00:06:39.826 04:22:59 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 2667954 is not found' 00:06:39.826 Process with pid 2667954 is not found 00:06:39.826 04:22:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2667960 ]] 00:06:39.826 04:22:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2667960 00:06:39.826 04:22:59 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2667960 ']' 00:06:39.826 04:22:59 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2667960 00:06:39.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2667960) - No such process 00:06:39.826 04:22:59 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 2667960 is not found' 00:06:39.826 Process with pid 2667960 is not found 00:06:39.826 04:22:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.826 00:06:39.826 real 0m15.366s 00:06:39.826 user 0m26.815s 00:06:39.826 sys 0m5.207s 00:06:39.826 04:22:59 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.826 04:22:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.826 ************************************ 00:06:39.826 END TEST cpu_locks 00:06:39.826 ************************************ 00:06:39.826 00:06:39.826 real 0m41.824s 00:06:39.826 user 1m20.319s 00:06:39.826 sys 0m9.219s 00:06:39.826 04:22:59 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.826 04:22:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.826 ************************************ 00:06:39.826 END TEST event 00:06:39.826 ************************************ 00:06:39.826 04:22:59 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:39.826 04:22:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:39.826 04:22:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.826 04:22:59 -- common/autotest_common.sh@10 -- # set +x 00:06:39.826 ************************************ 00:06:39.826 START TEST thread 00:06:39.826 ************************************ 00:06:39.826 04:22:59 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:40.082 * Looking for test storage... 00:06:40.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:40.082 04:23:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:40.082 04:23:00 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:40.082 04:23:00 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.082 04:23:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.083 ************************************ 00:06:40.083 START TEST thread_poller_perf 00:06:40.083 ************************************ 00:06:40.083 04:23:00 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:40.083 [2024-07-14 04:23:00.074725] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:40.083 [2024-07-14 04:23:00.074789] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668323 ] 00:06:40.083 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.083 [2024-07-14 04:23:00.134667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.083 [2024-07-14 04:23:00.222748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.083 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:41.453 ====================================== 00:06:41.453 busy:2712265948 (cyc) 00:06:41.453 total_run_count: 294000 00:06:41.453 tsc_hz: 2700000000 (cyc) 00:06:41.453 ====================================== 00:06:41.453 poller_cost: 9225 (cyc), 3416 (nsec) 00:06:41.453 00:06:41.453 real 0m1.252s 00:06:41.453 user 0m1.165s 00:06:41.453 sys 0m0.082s 00:06:41.453 04:23:01 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.453 04:23:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:41.453 ************************************ 00:06:41.453 END TEST thread_poller_perf 00:06:41.453 ************************************ 00:06:41.453 04:23:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:41.453 04:23:01 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:41.453 04:23:01 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.453 04:23:01 thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.453 ************************************ 00:06:41.453 START TEST thread_poller_perf 00:06:41.453 ************************************ 00:06:41.453 04:23:01 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:41.453 [2024-07-14 04:23:01.377234] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:41.453 [2024-07-14 04:23:01.377306] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668481 ] 00:06:41.453 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.453 [2024-07-14 04:23:01.440438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.453 [2024-07-14 04:23:01.533777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.453 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:42.859 ====================================== 00:06:42.859 busy:2702286117 (cyc) 00:06:42.859 total_run_count: 3852000 00:06:42.859 tsc_hz: 2700000000 (cyc) 00:06:42.859 ====================================== 00:06:42.859 poller_cost: 701 (cyc), 259 (nsec) 00:06:42.859 00:06:42.859 real 0m1.255s 00:06:42.859 user 0m1.168s 00:06:42.859 sys 0m0.081s 00:06:42.859 04:23:02 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.859 04:23:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:42.859 ************************************ 00:06:42.859 END TEST thread_poller_perf 00:06:42.859 ************************************ 00:06:42.859 04:23:02 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:42.859 00:06:42.859 real 0m2.657s 00:06:42.859 user 0m2.392s 00:06:42.859 sys 0m0.265s 00:06:42.859 04:23:02 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.859 04:23:02 thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.859 ************************************ 00:06:42.859 END TEST thread 00:06:42.859 ************************************ 00:06:42.859 04:23:02 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:42.859 04:23:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:42.859 04:23:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.859 04:23:02 -- common/autotest_common.sh@10 -- # set +x 00:06:42.859 ************************************ 00:06:42.859 START TEST accel 00:06:42.860 ************************************ 00:06:42.860 04:23:02 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:42.860 * Looking for test storage... 
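Note: the poller_cost figures in the two runs above follow directly from the counters poller_perf prints: busy cycles divided by total_run_count gives the cost of one poll in cycles, and dividing by tsc_hz converts that to nanoseconds. A quick sanity check of the 1-microsecond-period run, using only the numbers reported above (the shell below is illustrative, not part of the test suite):

    # Recompute poller_cost from the counters printed by poller_perf above.
    busy_cyc=2712265948       # busy cycles reported by the first run
    run_count=294000          # total_run_count reported by the first run
    tsc_hz=2700000000         # 2.7 GHz timestamp counter
    cost_cyc=$(( busy_cyc / run_count ))                # -> 9225 (cyc)
    cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))     # -> 3416 (nsec)
    echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"

The zero-period run works out the same way: 2702286117 / 3852000 is roughly 701 cycles, or about 259 ns per poll, matching the second report.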
00:06:42.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:42.860 04:23:02 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:42.860 04:23:02 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:42.860 04:23:02 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:42.860 04:23:02 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2668799 00:06:42.860 04:23:02 accel -- accel/accel.sh@63 -- # waitforlisten 2668799 00:06:42.860 04:23:02 accel -- common/autotest_common.sh@827 -- # '[' -z 2668799 ']' 00:06:42.860 04:23:02 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:42.860 04:23:02 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.860 04:23:02 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.860 04:23:02 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:42.860 04:23:02 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.860 04:23:02 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.860 04:23:02 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.860 04:23:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.860 04:23:02 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.860 04:23:02 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.860 04:23:02 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.860 04:23:02 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.860 04:23:02 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:42.860 04:23:02 accel -- accel/accel.sh@41 -- # jq -r . 00:06:42.860 [2024-07-14 04:23:02.787004] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:42.860 [2024-07-14 04:23:02.787091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668799 ] 00:06:42.860 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.860 [2024-07-14 04:23:02.845631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.860 [2024-07-14 04:23:02.934810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.130 04:23:03 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.130 04:23:03 accel -- common/autotest_common.sh@860 -- # return 0 00:06:43.130 04:23:03 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:43.130 04:23:03 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:43.130 04:23:03 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:43.130 04:23:03 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:43.130 04:23:03 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:43.130 04:23:03 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:43.130 04:23:03 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.130 04:23:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.130 04:23:03 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:43.130 04:23:03 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.130 04:23:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.130 04:23:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.130 04:23:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.130 04:23:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.130 04:23:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.130 04:23:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.130 04:23:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.130 04:23:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.130 04:23:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.130 04:23:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.130 04:23:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.130 04:23:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.130 04:23:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.130 04:23:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.130 04:23:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.130 04:23:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.130 04:23:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.130 04:23:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.130 04:23:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.130 04:23:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.130 04:23:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.130 04:23:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.130 04:23:03 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.130 04:23:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.130 04:23:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.130 04:23:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.130 04:23:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.130 04:23:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.130 04:23:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.130 04:23:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.130 04:23:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.130 04:23:03 accel -- accel/accel.sh@75 -- # killprocess 2668799 00:06:43.130 04:23:03 accel -- common/autotest_common.sh@946 -- # '[' -z 2668799 ']' 00:06:43.130 04:23:03 accel -- common/autotest_common.sh@950 -- # kill -0 2668799 00:06:43.130 04:23:03 accel -- common/autotest_common.sh@951 -- # uname 00:06:43.130 04:23:03 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:43.130 04:23:03 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2668799 00:06:43.130 04:23:03 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:43.130 04:23:03 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:43.130 04:23:03 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2668799' 00:06:43.130 killing process with pid 2668799 00:06:43.130 04:23:03 accel -- common/autotest_common.sh@965 -- # kill 2668799 00:06:43.130 04:23:03 accel -- common/autotest_common.sh@970 -- # wait 2668799 00:06:43.698 04:23:03 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:43.698 04:23:03 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:43.698 04:23:03 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:43.698 04:23:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.698 04:23:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.698 04:23:03 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:43.698 04:23:03 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:43.698 04:23:03 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:43.698 04:23:03 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.698 04:23:03 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.698 04:23:03 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.698 04:23:03 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.698 04:23:03 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.698 04:23:03 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:43.698 04:23:03 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:43.698 04:23:03 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.698 04:23:03 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:43.698 04:23:03 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:43.698 04:23:03 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:43.698 04:23:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.698 04:23:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.698 ************************************ 00:06:43.698 START TEST accel_missing_filename 00:06:43.698 ************************************ 00:06:43.698 04:23:03 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:43.698 04:23:03 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:43.698 04:23:03 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:43.698 04:23:03 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:43.698 04:23:03 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.698 04:23:03 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:43.698 04:23:03 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.698 04:23:03 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:43.698 04:23:03 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:43.698 04:23:03 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:43.698 04:23:03 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.698 04:23:03 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.698 04:23:03 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.698 04:23:03 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.698 04:23:03 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.698 04:23:03 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:43.698 04:23:03 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:43.698 [2024-07-14 04:23:03.806197] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:43.698 [2024-07-14 04:23:03.806260] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668889 ] 00:06:43.698 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.698 [2024-07-14 04:23:03.872732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.956 [2024-07-14 04:23:03.966005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.956 [2024-07-14 04:23:04.024844] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.956 [2024-07-14 04:23:04.111040] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:44.213 A filename is required. 
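Note: that failure is exactly what accel_missing_filename expects. The compress workload needs an input file via -l, so accel_perf aborts, and the NOT wrapper from autotest_common.sh turns the non-zero exit status into a passing test; the es= lines that follow are that bookkeeping (es=234 drops to 106 after the >128 check, then gets remapped to 1 before the final test). A stripped-down sketch of the pattern, assuming only what the trace shows; the real helper's argument validation and status remapping are more involved:

    # Minimal NOT-style wrapper: succeed only if the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?                 # run the command, capture its exit status
        if (( es > 128 )); then
            es=$(( es - 128 ))        # illustrative remap of signal-style statuses (234 -> 106 above)
        fi
        (( es != 0 ))                 # return success only when the command failed
    }

    NOT accel_perf -t 1 -w compress   # mirrors the traced invocation: missing -l, so NOT passes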
00:06:44.213 04:23:04 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:44.213 04:23:04 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:44.213 04:23:04 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:44.213 04:23:04 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:44.213 04:23:04 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:44.213 04:23:04 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:44.213 00:06:44.213 real 0m0.408s 00:06:44.213 user 0m0.292s 00:06:44.213 sys 0m0.148s 00:06:44.213 04:23:04 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.213 04:23:04 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:44.213 ************************************ 00:06:44.213 END TEST accel_missing_filename 00:06:44.213 ************************************ 00:06:44.213 04:23:04 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:44.213 04:23:04 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:44.213 04:23:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.213 04:23:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.213 ************************************ 00:06:44.213 START TEST accel_compress_verify 00:06:44.213 ************************************ 00:06:44.213 04:23:04 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:44.213 04:23:04 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:44.213 04:23:04 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:44.213 04:23:04 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:44.213 04:23:04 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.213 04:23:04 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:44.213 04:23:04 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.213 04:23:04 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:44.213 04:23:04 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:44.213 04:23:04 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:44.213 04:23:04 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.213 04:23:04 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.213 04:23:04 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.213 04:23:04 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.213 04:23:04 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.213 
04:23:04 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:44.213 04:23:04 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:44.213 [2024-07-14 04:23:04.261947] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:44.213 [2024-07-14 04:23:04.262010] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668996 ] 00:06:44.213 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.213 [2024-07-14 04:23:04.325798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.471 [2024-07-14 04:23:04.416679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.471 [2024-07-14 04:23:04.478518] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.471 [2024-07-14 04:23:04.563391] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:44.471 00:06:44.471 Compression does not support the verify option, aborting. 00:06:44.471 04:23:04 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:44.471 04:23:04 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:44.471 04:23:04 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:44.471 04:23:04 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:44.471 04:23:04 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:44.471 04:23:04 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:44.471 00:06:44.471 real 0m0.404s 00:06:44.471 user 0m0.294s 00:06:44.471 sys 0m0.144s 00:06:44.471 04:23:04 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.471 04:23:04 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:44.471 ************************************ 00:06:44.471 END TEST accel_compress_verify 00:06:44.471 ************************************ 00:06:44.729 04:23:04 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:44.729 04:23:04 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:44.729 04:23:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.729 04:23:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.729 ************************************ 00:06:44.729 START TEST accel_wrong_workload 00:06:44.729 ************************************ 00:06:44.729 04:23:04 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:44.729 04:23:04 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:44.729 04:23:04 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:44.729 04:23:04 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:44.729 04:23:04 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.729 04:23:04 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:44.729 04:23:04 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.729 04:23:04 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:06:44.729 04:23:04 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:44.729 04:23:04 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:44.729 04:23:04 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.729 04:23:04 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.729 04:23:04 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.729 04:23:04 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.729 04:23:04 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.729 04:23:04 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:44.729 04:23:04 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:44.729 Unsupported workload type: foobar 00:06:44.729 [2024-07-14 04:23:04.714215] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:44.729 accel_perf options: 00:06:44.729 [-h help message] 00:06:44.729 [-q queue depth per core] 00:06:44.729 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:44.729 [-T number of threads per core 00:06:44.729 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:44.729 [-t time in seconds] 00:06:44.729 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:44.729 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:44.729 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:44.729 [-l for compress/decompress workloads, name of uncompressed input file 00:06:44.729 [-S for crc32c workload, use this seed value (default 0) 00:06:44.729 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:44.729 [-f for fill workload, use this BYTE value (default 255) 00:06:44.729 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:44.729 [-y verify result if this switch is on] 00:06:44.729 [-a tasks to allocate per core (default: same value as -q)] 00:06:44.729 Can be used to spread operations across a wider range of memory. 
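Note: the option list printed above is accel_perf's own usage text; the positive tests further down simply combine those flags. For reference, the crc32c invocation traced in the accel_crc32c test below can be run by hand against the binary path shown in this log (adjust the path for your own checkout); the harness additionally passes -c /dev/fd/62 to feed in the configuration assembled by build_accel_config, which a standalone run can omit:

    # 1-second software crc32c workload, seed value 32, with result verification,
    # mirroring the flags used by the accel_crc32c test in this log.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w crc32c -S 32 -y

Compress and decompress workloads additionally need -l <uncompressed input file>, which is the check the accel_missing_filename failure above exercised.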
00:06:44.729 04:23:04 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:44.729 04:23:04 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:44.729 04:23:04 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:44.729 04:23:04 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:44.729 00:06:44.729 real 0m0.022s 00:06:44.729 user 0m0.015s 00:06:44.729 sys 0m0.007s 00:06:44.729 04:23:04 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.729 04:23:04 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:44.729 ************************************ 00:06:44.729 END TEST accel_wrong_workload 00:06:44.729 ************************************ 00:06:44.729 Error: writing output failed: Broken pipe 00:06:44.729 04:23:04 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:44.729 04:23:04 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:44.729 04:23:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.729 04:23:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.729 ************************************ 00:06:44.729 START TEST accel_negative_buffers 00:06:44.729 ************************************ 00:06:44.729 04:23:04 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:44.729 04:23:04 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:44.729 04:23:04 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:44.729 04:23:04 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:44.729 04:23:04 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.729 04:23:04 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:44.729 04:23:04 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.729 04:23:04 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:44.729 04:23:04 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:44.729 04:23:04 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:44.729 04:23:04 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.729 04:23:04 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.729 04:23:04 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.729 04:23:04 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.729 04:23:04 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.729 04:23:04 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:44.729 04:23:04 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:44.729 -x option must be non-negative. 
00:06:44.729 [2024-07-14 04:23:04.781449] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:44.729 accel_perf options: 00:06:44.729 [-h help message] 00:06:44.729 [-q queue depth per core] 00:06:44.729 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:44.729 [-T number of threads per core 00:06:44.729 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:44.729 [-t time in seconds] 00:06:44.729 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:44.729 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:44.729 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:44.729 [-l for compress/decompress workloads, name of uncompressed input file 00:06:44.729 [-S for crc32c workload, use this seed value (default 0) 00:06:44.729 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:44.729 [-f for fill workload, use this BYTE value (default 255) 00:06:44.729 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:44.729 [-y verify result if this switch is on] 00:06:44.729 [-a tasks to allocate per core (default: same value as -q)] 00:06:44.729 Can be used to spread operations across a wider range of memory. 00:06:44.729 04:23:04 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:44.729 04:23:04 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:44.729 04:23:04 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:44.729 04:23:04 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:44.729 00:06:44.729 real 0m0.023s 00:06:44.729 user 0m0.011s 00:06:44.729 sys 0m0.012s 00:06:44.729 04:23:04 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.729 04:23:04 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:44.729 ************************************ 00:06:44.729 END TEST accel_negative_buffers 00:06:44.729 ************************************ 00:06:44.729 Error: writing output failed: Broken pipe 00:06:44.729 04:23:04 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:44.729 04:23:04 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:44.729 04:23:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.729 04:23:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.729 ************************************ 00:06:44.729 START TEST accel_crc32c 00:06:44.729 ************************************ 00:06:44.729 04:23:04 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:44.729 04:23:04 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:44.729 04:23:04 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:44.729 04:23:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.729 04:23:04 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:44.729 04:23:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.729 04:23:04 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:44.729 04:23:04 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:44.729 04:23:04 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.729 04:23:04 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.729 04:23:04 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.729 04:23:04 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.729 04:23:04 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.729 04:23:04 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:44.729 04:23:04 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:44.729 [2024-07-14 04:23:04.846771] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:44.729 [2024-07-14 04:23:04.846836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669057 ] 00:06:44.729 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.729 [2024-07-14 04:23:04.908660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.987 [2024-07-14 04:23:05.006918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.987 04:23:05 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.987 04:23:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 04:23:06 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:46.357 04:23:06 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.357 00:06:46.357 real 0m1.394s 00:06:46.357 user 0m1.253s 00:06:46.357 sys 0m0.144s 00:06:46.357 04:23:06 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.357 04:23:06 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:46.357 ************************************ 00:06:46.357 END TEST accel_crc32c 00:06:46.357 ************************************ 00:06:46.357 04:23:06 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:46.357 04:23:06 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:46.357 04:23:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.357 04:23:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.357 ************************************ 00:06:46.357 START TEST accel_crc32c_C2 00:06:46.357 ************************************ 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:46.357 04:23:06 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:46.357 [2024-07-14 04:23:06.286678] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:46.357 [2024-07-14 04:23:06.286743] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669339 ] 00:06:46.357 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.357 [2024-07-14 04:23:06.349306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.357 [2024-07-14 04:23:06.442509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 04:23:06 accel.accel_crc32c_C2 
-- accel/accel.sh@19 -- # IFS=: 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.357 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.358 04:23:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 
00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.732 00:06:47.732 real 0m1.409s 00:06:47.732 user 0m1.270s 00:06:47.732 sys 0m0.142s 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.732 04:23:07 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:47.732 ************************************ 00:06:47.732 END TEST accel_crc32c_C2 00:06:47.732 ************************************ 00:06:47.732 04:23:07 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:47.732 04:23:07 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:47.732 04:23:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.732 04:23:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.732 ************************************ 00:06:47.732 START TEST accel_copy 00:06:47.732 ************************************ 00:06:47.732 04:23:07 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:47.732 04:23:07 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:47.732 04:23:07 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:47.732 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.732 04:23:07 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:47.732 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.732 
04:23:07 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:47.732 04:23:07 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:47.732 04:23:07 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.732 04:23:07 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.732 04:23:07 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.732 04:23:07 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.732 04:23:07 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.732 04:23:07 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:47.732 04:23:07 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:47.732 [2024-07-14 04:23:07.742052] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:47.732 [2024-07-14 04:23:07.742108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669492 ] 00:06:47.732 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.732 [2024-07-14 04:23:07.802519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.732 [2024-07-14 04:23:07.895431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.991 04:23:07 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.991 04:23:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:49.369 04:23:09 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.369 00:06:49.369 real 0m1.407s 00:06:49.369 user 0m1.267s 00:06:49.369 sys 0m0.142s 00:06:49.369 04:23:09 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.369 04:23:09 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:49.369 ************************************ 00:06:49.369 END TEST accel_copy 00:06:49.369 ************************************ 00:06:49.369 04:23:09 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:49.369 04:23:09 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:49.369 04:23:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.369 04:23:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.369 ************************************ 00:06:49.369 START TEST accel_fill 00:06:49.369 ************************************ 00:06:49.369 04:23:09 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:49.369 04:23:09 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:49.369 04:23:09 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:49.369 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.369 04:23:09 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:49.369 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.369 04:23:09 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:49.369 04:23:09 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:49.369 04:23:09 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.369 04:23:09 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.369 04:23:09 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.370 04:23:09 
accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:49.370 [2024-07-14 04:23:09.196816] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:49.370 [2024-07-14 04:23:09.196891] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669643 ] 00:06:49.370 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.370 [2024-07-14 04:23:09.259356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.370 [2024-07-14 04:23:09.352881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.370 04:23:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.744 04:23:10 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:50.744 04:23:10 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.744 00:06:50.744 real 0m1.405s 00:06:50.744 user 0m1.268s 00:06:50.744 sys 0m0.139s 00:06:50.744 04:23:10 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.744 04:23:10 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:50.744 ************************************ 00:06:50.744 END TEST accel_fill 00:06:50.744 ************************************ 00:06:50.744 04:23:10 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:50.744 04:23:10 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:50.744 04:23:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.744 04:23:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.744 ************************************ 00:06:50.744 START TEST accel_copy_crc32c 00:06:50.744 ************************************ 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
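The long runs of '# val= ... # case "$var" in ... # IFS=: ... # read -r var val' entries that dominate this part of the log are the xtrace of one parsing loop in accel/accel.sh: accel_perf prints its configuration and results as colon-separated key/value lines, and the harness reads them back to learn which opcode and module actually ran before asserting on them (the [[ -n software ]] / [[ -n copy_crc32c ]] / [[ software == \s\o\f\t\w\a\r\e ]] checks that close each sub-test). Below is a simplified, runnable sketch of that loop; it is reconstructed from the trace, not copied from accel.sh, so the key names in the stand-in report are assumptions:

  # stand-in for accel_perf's report; the real label text differs
  report='opcode: copy_crc32c
  module: software'
  while IFS=: read -r var val; do
      val=${val# }                        # trim the leading space (the traced "val=..." assignments)
      case "$var" in                      # the traced 'case "$var" in'
          *module*) accel_module=$val ;;  # traced as accel_module=software
          *opcode*) accel_opc=$val ;;     # traced as accel_opc=copy_crc32c
      esac
  done <<< "$report"
  [[ -n $accel_module && -n $accel_opc && $accel_module == software ]] && echo "software $accel_opc verified"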
00:06:50.744 [2024-07-14 04:23:10.651055] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:50.744 [2024-07-14 04:23:10.651112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669893 ] 00:06:50.744 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.744 [2024-07-14 04:23:10.711638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.744 [2024-07-14 04:23:10.800295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.744 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.745 04:23:10 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.745 04:23:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.120 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.121 00:06:52.121 real 0m1.388s 00:06:52.121 user 0m1.252s 00:06:52.121 sys 0m0.138s 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.121 04:23:12 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:52.121 ************************************ 00:06:52.121 END TEST accel_copy_crc32c 00:06:52.121 ************************************ 00:06:52.121 04:23:12 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:52.121 04:23:12 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:52.121 04:23:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.121 04:23:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.121 ************************************ 00:06:52.121 START TEST accel_copy_crc32c_C2 00:06:52.121 ************************************ 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:52.121 [2024-07-14 04:23:12.087259] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:52.121 [2024-07-14 04:23:12.087323] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670078 ] 00:06:52.121 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.121 [2024-07-14 04:23:12.149717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.121 [2024-07-14 04:23:12.242643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.121 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.122 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.122 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.122 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.380 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:52.380 04:23:12 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:52.380 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.380 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.380 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.381 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.381 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.381 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.381 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.381 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.381 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.381 04:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.315 00:06:53.315 real 0m1.411s 00:06:53.315 user 0m1.266s 00:06:53.315 sys 0m0.148s 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.315 04:23:13 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:53.315 
************************************ 00:06:53.315 END TEST accel_copy_crc32c_C2 00:06:53.315 ************************************ 00:06:53.315 04:23:13 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:53.315 04:23:13 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:53.315 04:23:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.315 04:23:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.573 ************************************ 00:06:53.573 START TEST accel_dualcast 00:06:53.573 ************************************ 00:06:53.573 04:23:13 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:53.573 [2024-07-14 04:23:13.543135] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
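Reassembled from the wrapped command line a few entries above, the dualcast step comes down to a single accel_perf invocation. The /dev/fd/62 argument is the descriptor over which build_accel_config feeds in the accel JSON config (empty for this run: the traced [[ 0 -gt 0 ]] and [[ -n '' ]] checks all fall through), and the harness owns that descriptor; the flags themselves are copied verbatim from the trace:

  # software dualcast over 4096-byte buffers for 1 second, as recorded in the trace
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -c /dev/fd/62 -t 1 -w dualcast -y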
00:06:53.573 [2024-07-14 04:23:13.543227] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670237 ] 00:06:53.573 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.573 [2024-07-14 04:23:13.604190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.573 [2024-07-14 04:23:13.697150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.573 
04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.573 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.831 04:23:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:53.831 04:23:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.831 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.831 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.831 04:23:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.831 04:23:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.831 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.831 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.831 04:23:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:53.831 04:23:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.831 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.831 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.831 04:23:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.831 04:23:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.831 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.831 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.831 04:23:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.831 04:23:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.831 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.831 04:23:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:54.765 04:23:14 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:54.765 04:23:14 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.765 00:06:54.765 real 0m1.410s 00:06:54.765 user 0m1.264s 00:06:54.765 sys 0m0.149s 00:06:54.765 04:23:14 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.765 04:23:14 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:54.765 ************************************ 00:06:54.765 END TEST accel_dualcast 00:06:54.765 ************************************ 00:06:55.024 04:23:14 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:55.024 04:23:14 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:55.024 04:23:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.024 04:23:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.024 ************************************ 00:06:55.024 START TEST accel_compare 00:06:55.024 ************************************ 00:06:55.024 04:23:14 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:06:55.024 04:23:14 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:55.024 04:23:14 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:55.024 04:23:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:55.024 04:23:14 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:55.024 04:23:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:55.024 04:23:14 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:55.024 04:23:14 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:55.024 04:23:14 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.024 04:23:14 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.024 04:23:14 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.024 04:23:14 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.024 04:23:14 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.024 04:23:14 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:55.024 04:23:14 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:55.024 [2024-07-14 04:23:14.999173] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
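Every sub-test in this log is bracketed by the same bookkeeping: the wrapper prints the START TEST banner, times the wrapped command, and emits the real/user/sys summary followed by the END TEST banner (for accel_dualcast just above: real 0m1.410s, user 0m1.264s, sys 0m0.149s). A minimal sketch of a wrapper with that shape, reconstructed from the banners and timings in the log; the real run_test lives in common/autotest_common.sh and certainly does more than this:

  run_test() {    # hypothetical simplified form
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"    # produces the real/user/sys lines recorded after each sub-test
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }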
00:06:55.024 [2024-07-14 04:23:14.999238] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670388 ] 00:06:55.024 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.024 [2024-07-14 04:23:15.062314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.024 [2024-07-14 04:23:15.155092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.024 04:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:55.024 04:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:55.024 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:55.024 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:55.282 04:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:55.283 04:23:15 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:55.283 04:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.217 04:23:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.218 04:23:16 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:56.218 04:23:16 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.218 00:06:56.218 real 0m1.398s 00:06:56.218 user 0m1.258s 00:06:56.218 sys 0m0.142s 00:06:56.218 04:23:16 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.218 04:23:16 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:56.218 ************************************ 00:06:56.218 END TEST accel_compare 00:06:56.218 ************************************ 00:06:56.218 04:23:16 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:56.218 04:23:16 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:56.218 04:23:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.218 04:23:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.482 ************************************ 00:06:56.482 START TEST accel_xor 00:06:56.482 ************************************ 00:06:56.482 04:23:16 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:56.482 [2024-07-14 04:23:16.442502] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:56.482 [2024-07-14 04:23:16.442569] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670666 ] 00:06:56.482 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.482 [2024-07-14 04:23:16.501557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.482 [2024-07-14 04:23:16.594546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.482 04:23:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.483 04:23:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.903 
04:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.903 00:06:57.903 real 0m1.405s 00:06:57.903 user 0m1.258s 00:06:57.903 sys 0m0.149s 00:06:57.903 04:23:17 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.903 04:23:17 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:57.903 ************************************ 00:06:57.903 END TEST accel_xor 00:06:57.903 ************************************ 00:06:57.903 04:23:17 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:57.903 04:23:17 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:57.903 04:23:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.903 04:23:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.903 ************************************ 00:06:57.903 START TEST accel_xor 00:06:57.903 ************************************ 00:06:57.903 04:23:17 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:57.903 04:23:17 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:57.903 [2024-07-14 04:23:17.889529] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
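For reference, the xor run that just finished and the three-source xor run starting here both reduce to the accel_perf command captured at accel.sh@12; a minimal standalone sketch using the same workspace path follows. Reading -x as the xor source-buffer count and -y as result verification is an assumption, not something the trace states, and the -c /dev/fd/62 JSON config fd supplied by the harness is dropped here.

    SPDK_EXAMPLES=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
    # 1-second software xor with verification, default source count (first accel_xor test)
    "$SPDK_EXAMPLES/accel_perf" -t 1 -w xor -y
    # same run with three source buffers, matching the -x 3 variant started above
    "$SPDK_EXAMPLES/accel_perf" -t 1 -w xor -y -x 3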
00:06:57.903 [2024-07-14 04:23:17.889582] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670826 ] 00:06:57.903 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.903 [2024-07-14 04:23:17.949901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.903 [2024-07-14 04:23:18.042681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.162 04:23:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.096 04:23:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.096 04:23:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.097 
04:23:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:59.097 04:23:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.097 00:06:59.097 real 0m1.405s 00:06:59.097 user 0m1.264s 00:06:59.097 sys 0m0.143s 00:06:59.097 04:23:19 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.097 04:23:19 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:59.097 ************************************ 00:06:59.097 END TEST accel_xor 00:06:59.097 ************************************ 00:06:59.356 04:23:19 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:59.356 04:23:19 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:59.356 04:23:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.356 04:23:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.356 ************************************ 00:06:59.356 START TEST accel_dif_verify 00:06:59.356 ************************************ 00:06:59.356 04:23:19 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:06:59.356 04:23:19 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:59.356 04:23:19 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:59.356 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.356 04:23:19 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:59.356 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.356 04:23:19 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:59.356 04:23:19 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:59.356 04:23:19 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.356 04:23:19 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.356 04:23:19 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.356 04:23:19 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.356 04:23:19 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.356 04:23:19 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:59.356 04:23:19 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:59.356 [2024-07-14 04:23:19.347038] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:59.356 [2024-07-14 04:23:19.347100] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670982 ] 00:06:59.356 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.356 [2024-07-14 04:23:19.410172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.356 [2024-07-14 04:23:19.503139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.615 
04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.615 04:23:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:59.616 04:23:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.616 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.616 04:23:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.550 04:23:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:00.550 
04:23:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.550 04:23:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.550 04:23:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.550 04:23:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:00.550 04:23:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.550 04:23:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.550 04:23:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.550 04:23:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:00.550 04:23:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.550 04:23:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.809 04:23:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.809 04:23:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:00.809 04:23:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.809 04:23:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.809 04:23:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.809 04:23:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:00.809 04:23:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.809 04:23:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.809 04:23:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.809 04:23:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:00.809 04:23:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:00.809 04:23:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:00.809 04:23:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:00.809 04:23:20 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.809 04:23:20 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:00.809 04:23:20 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.809 00:07:00.809 real 0m1.415s 00:07:00.809 user 0m1.271s 00:07:00.810 sys 0m0.149s 00:07:00.810 04:23:20 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.810 04:23:20 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:00.810 ************************************ 00:07:00.810 END TEST accel_dif_verify 00:07:00.810 ************************************ 00:07:00.810 04:23:20 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:00.810 04:23:20 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:00.810 04:23:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.810 04:23:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.810 ************************************ 00:07:00.810 START TEST accel_dif_generate 00:07:00.810 ************************************ 00:07:00.810 04:23:20 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:00.810 04:23:20 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:00.810 04:23:20 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:00.810 04:23:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.810 04:23:20 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
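The dif_verify run that just completed and the dif_generate run starting here use the same accel_perf binary with only the -w workload switched; the '4096 bytes', '512 bytes' and '8 bytes' values in the trace are read below as transfer size, block size and DIF metadata size, which is an interpretation rather than something the log states. A rough sketch of the three DIF workloads exercised in this block:

    SPDK_EXAMPLES=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
    for workload in dif_verify dif_generate dif_generate_copy; do
        # 1-second run per workload; the traced defaults appear to be 4096-byte
        # transfers, 512-byte blocks and 8 bytes of protection info per block
        "$SPDK_EXAMPLES/accel_perf" -t 1 -w "$workload"
    done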
00:07:00.810 04:23:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.810 04:23:20 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:00.810 04:23:20 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:00.810 04:23:20 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.810 04:23:20 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.810 04:23:20 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.810 04:23:20 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.810 04:23:20 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.810 04:23:20 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:00.810 04:23:20 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:00.810 [2024-07-14 04:23:20.811517] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:00.810 [2024-07-14 04:23:20.811582] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671144 ] 00:07:00.810 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.810 [2024-07-14 04:23:20.873595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.810 [2024-07-14 04:23:20.965637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.069 04:23:21 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.069 04:23:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:02.003 04:23:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.003 00:07:02.003 real 0m1.386s 00:07:02.003 user 0m1.245s 00:07:02.003 sys 
0m0.145s 00:07:02.003 04:23:22 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.003 04:23:22 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:02.003 ************************************ 00:07:02.003 END TEST accel_dif_generate 00:07:02.004 ************************************ 00:07:02.263 04:23:22 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:02.263 04:23:22 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:02.263 04:23:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.263 04:23:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.263 ************************************ 00:07:02.263 START TEST accel_dif_generate_copy 00:07:02.263 ************************************ 00:07:02.263 04:23:22 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:02.263 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:02.263 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:02.263 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.263 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:02.263 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.263 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:02.263 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:02.263 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.263 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.263 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.263 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.263 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.263 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:02.263 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:02.263 [2024-07-14 04:23:22.245149] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
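Every test in this block, including the dif_generate_copy run starting here, is judged by the same three checks traced at accel.sh@27 once the timing summary prints; in shell terms the pass condition is roughly the following (accel_module and accel_opc are filled in by the IFS=: / read -r var val loop seen throughout this log, and this is a reconstruction of the check, not the verbatim accel.sh source):

    # no hardware accel engines are configured for this job, so the module
    # resolved from the run's output must be the software implementation
    [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]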
00:07:02.263 [2024-07-14 04:23:22.245211] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671411 ] 00:07:02.263 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.263 [2024-07-14 04:23:22.307960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.263 [2024-07-14 04:23:22.400303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.521 04:23:22 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.521 04:23:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.454 00:07:03.454 real 0m1.414s 00:07:03.454 user 0m1.269s 00:07:03.454 sys 0m0.148s 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.454 04:23:23 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:03.454 ************************************ 00:07:03.454 END TEST accel_dif_generate_copy 00:07:03.454 ************************************ 00:07:03.711 04:23:23 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:03.711 04:23:23 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:03.711 04:23:23 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:03.711 04:23:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.711 04:23:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.711 ************************************ 00:07:03.711 START TEST accel_comp 00:07:03.711 ************************************ 00:07:03.711 04:23:23 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:03.711 04:23:23 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:03.711 04:23:23 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:07:03.711 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.711 04:23:23 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:03.711 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.711 04:23:23 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:03.711 04:23:23 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:03.711 04:23:23 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.711 04:23:23 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.711 04:23:23 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.711 04:23:23 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.711 04:23:23 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.711 04:23:23 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:03.711 04:23:23 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:03.711 [2024-07-14 04:23:23.707566] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:03.711 [2024-07-14 04:23:23.707630] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671571 ] 00:07:03.711 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.711 [2024-07-14 04:23:23.771104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.711 [2024-07-14 04:23:23.862534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.968 
04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.968 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.969 04:23:23 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.969 04:23:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:05.339 04:23:25 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.339 00:07:05.339 real 0m1.418s 00:07:05.339 user 0m1.280s 00:07:05.339 sys 0m0.142s 00:07:05.339 04:23:25 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.339 04:23:25 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:05.339 ************************************ 00:07:05.339 END TEST accel_comp 00:07:05.339 ************************************ 00:07:05.339 04:23:25 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:05.339 04:23:25 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:05.339 04:23:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.339 04:23:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.339 ************************************ 00:07:05.339 START TEST accel_decomp 00:07:05.339 ************************************ 00:07:05.339 04:23:25 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:05.339 04:23:25 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:05.339 04:23:25 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:05.339 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.339 04:23:25 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:05.339 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.339 04:23:25 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:05.339 04:23:25 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:05.339 04:23:25 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.339 04:23:25 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.339 04:23:25 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.339 04:23:25 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.339 04:23:25 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.339 04:23:25 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:05.339 04:23:25 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:05.339 [2024-07-14 04:23:25.176296] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:05.339 [2024-07-14 04:23:25.176360] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671722 ] 00:07:05.340 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.340 [2024-07-14 04:23:25.239081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.340 [2024-07-14 04:23:25.329615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:05.340 04:23:25 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.340 04:23:25 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.340 04:23:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:06.713 04:23:26 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.713 00:07:06.713 real 0m1.408s 00:07:06.713 user 0m1.267s 00:07:06.713 sys 0m0.144s 00:07:06.713 04:23:26 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.713 04:23:26 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:06.713 ************************************ 00:07:06.713 END TEST accel_decomp 00:07:06.713 ************************************ 00:07:06.713 
04:23:26 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:06.713 04:23:26 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:06.713 04:23:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.713 04:23:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.713 ************************************ 00:07:06.713 START TEST accel_decmop_full 00:07:06.713 ************************************ 00:07:06.713 04:23:26 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:06.713 [2024-07-14 04:23:26.631985] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:06.713 [2024-07-14 04:23:26.632054] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671992 ] 00:07:06.713 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.713 [2024-07-14 04:23:26.693615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.713 [2024-07-14 04:23:26.787779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.713 04:23:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:08.086 04:23:28 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.086 00:07:08.086 real 0m1.405s 00:07:08.086 user 0m1.259s 00:07:08.086 sys 0m0.147s 00:07:08.086 04:23:28 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.086 04:23:28 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:08.086 ************************************ 00:07:08.086 END TEST accel_decmop_full 00:07:08.086 ************************************ 00:07:08.086 04:23:28 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:08.086 04:23:28 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:08.086 04:23:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.086 04:23:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.086 ************************************ 00:07:08.086 START TEST accel_decomp_mcore 00:07:08.086 ************************************ 00:07:08.086 04:23:28 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:08.086 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:08.086 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:08.086 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.086 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:08.086 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.086 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:08.086 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:08.086 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.086 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.086 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.086 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.086 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.086 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:08.086 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:08.086 [2024-07-14 04:23:28.082063] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:08.086 [2024-07-14 04:23:28.082121] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672157 ] 00:07:08.086 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.086 [2024-07-14 04:23:28.147302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.086 [2024-07-14 04:23:28.243730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.086 [2024-07-14 04:23:28.243791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.086 [2024-07-14 04:23:28.243909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.086 [2024-07-14 04:23:28.243912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.344 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.344 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.344 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.344 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.344 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.344 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.344 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.344 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.344 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.344 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.344 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.344 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.344 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:08.344 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.344 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.344 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.344 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.344 04:23:28 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.344 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.345 04:23:28 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.345 04:23:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.717 00:07:09.717 real 0m1.421s 00:07:09.717 user 0m4.724s 00:07:09.717 sys 0m0.159s 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.717 04:23:29 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:09.717 ************************************ 00:07:09.717 END TEST accel_decomp_mcore 00:07:09.717 ************************************ 00:07:09.717 04:23:29 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:09.717 04:23:29 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:09.717 04:23:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.717 04:23:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.717 ************************************ 00:07:09.717 START TEST accel_decomp_full_mcore 00:07:09.717 ************************************ 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:09.717 [2024-07-14 04:23:29.550156] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:09.717 [2024-07-14 04:23:29.550229] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672319 ] 00:07:09.717 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.717 [2024-07-14 04:23:29.611129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.717 [2024-07-14 04:23:29.707757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.717 [2024-07-14 04:23:29.707811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.717 [2024-07-14 04:23:29.707927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.717 [2024-07-14 04:23:29.707930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.717 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:09.718 04:23:29 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.718 04:23:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.094 00:07:11.094 real 0m1.431s 00:07:11.094 user 0m4.767s 00:07:11.094 sys 0m0.163s 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.094 04:23:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:11.094 ************************************ 00:07:11.094 END TEST accel_decomp_full_mcore 00:07:11.094 ************************************ 00:07:11.094 04:23:30 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:11.094 04:23:30 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:11.094 04:23:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.094 04:23:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.094 ************************************ 00:07:11.094 START TEST accel_decomp_mthread 00:07:11.094 ************************************ 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:07:11.094 [2024-07-14 04:23:31.027200] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:11.094 [2024-07-14 04:23:31.027276] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672473 ] 00:07:11.094 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.094 [2024-07-14 04:23:31.091516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.094 [2024-07-14 04:23:31.183555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.094 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.094 04:23:31 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.095 04:23:31 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:12.531 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.531 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.531 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.531 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.531 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.531 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.531 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.531 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.531 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.531 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.531 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.531 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.531 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.531 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.531 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.531 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.532 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.532 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.532 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.532 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.532 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.532 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.532 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:12.532 04:23:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.532 00:07:12.532 real 0m1.409s 00:07:12.532 user 0m1.272s 00:07:12.532 sys 0m0.139s 00:07:12.532 04:23:32 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.532 04:23:32 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:12.532 ************************************ 00:07:12.532 END TEST accel_decomp_mthread 00:07:12.532 ************************************ 00:07:12.532 04:23:32 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:12.532 04:23:32 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:12.532 04:23:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.532 04:23:32 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.532 ************************************ 00:07:12.532 START TEST accel_decomp_full_mthread 00:07:12.532 ************************************ 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:12.532 [2024-07-14 04:23:32.483376] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
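A rough standalone equivalent of the accel_perf command traced above (the harness supplies the JSON config over /dev/fd/62, omitted here; the flag readings are an interpretation of the command line, not quoted from accel_perf's help):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -o 0 -T 2
    # -t 1: run for one second          -w decompress: software decompress workload
    # -l:   compressed input file       -y: verify the decompressed output
    # -o 0: used by the "full" variants (traces show 111250 bytes here vs 4096-byte blocks above)
    # -T 2: two worker threads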
00:07:12.532 [2024-07-14 04:23:32.483438] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672761 ] 00:07:12.532 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.532 [2024-07-14 04:23:32.545526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.532 [2024-07-14 04:23:32.635742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.532 04:23:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:13.913 04:23:33 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.914 00:07:13.914 real 0m1.437s 00:07:13.914 user 0m1.298s 00:07:13.914 sys 0m0.142s 00:07:13.914 04:23:33 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.914 04:23:33 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:13.914 ************************************ 00:07:13.914 END TEST accel_decomp_full_mthread 00:07:13.914 
************************************ 00:07:13.914 04:23:33 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:13.914 04:23:33 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:13.914 04:23:33 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:13.914 04:23:33 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:13.914 04:23:33 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.914 04:23:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.914 04:23:33 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.914 04:23:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.914 04:23:33 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.914 04:23:33 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.914 04:23:33 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.914 04:23:33 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:13.914 04:23:33 accel -- accel/accel.sh@41 -- # jq -r . 00:07:13.914 ************************************ 00:07:13.914 START TEST accel_dif_functional_tests 00:07:13.914 ************************************ 00:07:13.914 04:23:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:13.914 [2024-07-14 04:23:33.989331] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:13.914 [2024-07-14 04:23:33.989392] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672925 ] 00:07:13.914 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.914 [2024-07-14 04:23:34.048818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.173 [2024-07-14 04:23:34.145145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.173 [2024-07-14 04:23:34.145213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.173 [2024-07-14 04:23:34.145215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.173 00:07:14.173 00:07:14.173 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.173 http://cunit.sourceforge.net/ 00:07:14.173 00:07:14.173 00:07:14.173 Suite: accel_dif 00:07:14.173 Test: verify: DIF generated, GUARD check ...passed 00:07:14.173 Test: verify: DIF generated, APPTAG check ...passed 00:07:14.173 Test: verify: DIF generated, REFTAG check ...passed 00:07:14.173 Test: verify: DIF not generated, GUARD check ...[2024-07-14 04:23:34.238400] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:14.173 passed 00:07:14.173 Test: verify: DIF not generated, APPTAG check ...[2024-07-14 04:23:34.238468] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:14.173 passed 00:07:14.173 Test: verify: DIF not generated, REFTAG check ...[2024-07-14 04:23:34.238498] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:14.173 passed 00:07:14.173 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:14.173 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-14 04:23:34.238559] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:14.173 passed 00:07:14.173 
Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:14.173 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:14.173 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:14.173 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-14 04:23:34.238685] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:14.173 passed 00:07:14.173 Test: verify copy: DIF generated, GUARD check ...passed 00:07:14.173 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:14.173 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:14.173 Test: verify copy: DIF not generated, GUARD check ...[2024-07-14 04:23:34.238829] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:14.173 passed 00:07:14.173 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-14 04:23:34.238887] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:14.173 passed 00:07:14.173 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-14 04:23:34.238923] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:14.173 passed 00:07:14.173 Test: generate copy: DIF generated, GUARD check ...passed 00:07:14.173 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:14.173 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:14.173 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:14.173 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:14.173 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:14.173 Test: generate copy: iovecs-len validate ...[2024-07-14 04:23:34.239141] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:14.173 passed 00:07:14.173 Test: generate copy: buffer alignment validate ...passed 00:07:14.173 00:07:14.173 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.173 suites 1 1 n/a 0 0 00:07:14.173 tests 26 26 26 0 0 00:07:14.173 asserts 115 115 115 0 n/a 00:07:14.173 00:07:14.174 Elapsed time = 0.002 seconds 00:07:14.432 00:07:14.432 real 0m0.504s 00:07:14.432 user 0m0.780s 00:07:14.432 sys 0m0.183s 00:07:14.432 04:23:34 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:14.432 04:23:34 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:14.432 ************************************ 00:07:14.432 END TEST accel_dif_functional_tests 00:07:14.432 ************************************ 00:07:14.432 00:07:14.432 real 0m31.789s 00:07:14.432 user 0m35.200s 00:07:14.432 sys 0m4.620s 00:07:14.432 04:23:34 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:14.432 04:23:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.432 ************************************ 00:07:14.432 END TEST accel 00:07:14.432 ************************************ 00:07:14.432 04:23:34 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:14.432 04:23:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:14.432 04:23:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.432 04:23:34 -- common/autotest_common.sh@10 -- # set +x 00:07:14.432 ************************************ 00:07:14.432 START TEST accel_rpc 00:07:14.432 ************************************ 00:07:14.432 04:23:34 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:14.432 * Looking for test storage... 00:07:14.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:14.432 04:23:34 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:14.432 04:23:34 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2673109 00:07:14.432 04:23:34 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:14.432 04:23:34 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2673109 00:07:14.432 04:23:34 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 2673109 ']' 00:07:14.432 04:23:34 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.432 04:23:34 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:14.432 04:23:34 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.432 04:23:34 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:14.432 04:23:34 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.690 [2024-07-14 04:23:34.628872] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:14.690 [2024-07-14 04:23:34.628967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673109 ] 00:07:14.690 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.690 [2024-07-14 04:23:34.685972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.690 [2024-07-14 04:23:34.770965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.690 04:23:34 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:14.690 04:23:34 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:14.690 04:23:34 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:14.690 04:23:34 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:14.690 04:23:34 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:14.690 04:23:34 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:14.690 04:23:34 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:14.690 04:23:34 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:14.690 04:23:34 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.690 04:23:34 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.690 ************************************ 00:07:14.690 START TEST accel_assign_opcode 00:07:14.690 ************************************ 00:07:14.690 04:23:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:14.690 04:23:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:14.690 04:23:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.690 04:23:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:14.690 [2024-07-14 04:23:34.859640] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:14.690 04:23:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.690 04:23:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:14.690 04:23:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.690 04:23:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:14.690 [2024-07-14 04:23:34.867647] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:14.690 04:23:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.691 04:23:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:14.691 04:23:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.691 04:23:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:14.948 04:23:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.948 04:23:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:14.948 04:23:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.948 04:23:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:14.948 04:23:35 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:14.948 04:23:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:14.948 04:23:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.207 software 00:07:15.207 00:07:15.207 real 0m0.296s 00:07:15.207 user 0m0.043s 00:07:15.207 sys 0m0.004s 00:07:15.207 04:23:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.207 04:23:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:15.207 ************************************ 00:07:15.207 END TEST accel_assign_opcode 00:07:15.207 ************************************ 00:07:15.207 04:23:35 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2673109 00:07:15.207 04:23:35 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 2673109 ']' 00:07:15.207 04:23:35 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 2673109 00:07:15.207 04:23:35 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:15.207 04:23:35 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:15.207 04:23:35 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2673109 00:07:15.207 04:23:35 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:15.207 04:23:35 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:15.207 04:23:35 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2673109' 00:07:15.207 killing process with pid 2673109 00:07:15.207 04:23:35 accel_rpc -- common/autotest_common.sh@965 -- # kill 2673109 00:07:15.207 04:23:35 accel_rpc -- common/autotest_common.sh@970 -- # wait 2673109 00:07:15.465 00:07:15.466 real 0m1.093s 00:07:15.466 user 0m1.039s 00:07:15.466 sys 0m0.415s 00:07:15.466 04:23:35 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.466 04:23:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.466 ************************************ 00:07:15.466 END TEST accel_rpc 00:07:15.466 ************************************ 00:07:15.466 04:23:35 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:15.466 04:23:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:15.466 04:23:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.466 04:23:35 -- common/autotest_common.sh@10 -- # set +x 00:07:15.723 ************************************ 00:07:15.723 START TEST app_cmdline 00:07:15.723 ************************************ 00:07:15.723 04:23:35 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:15.723 * Looking for test storage... 
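For reference, the accel_rpc test that just finished boils down to this RPC sequence (paths as in this workspace; the real script waits for the RPC socket with waitforlisten and checks each step, which is omitted in this sketch):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &                     # hold the target before framework init
    tgt=$!
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software     # pin the copy opcode to the software module
    "$SPDK/scripts/rpc.py" framework_start_init                     # let initialization finish
    "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy  # expected to print: software
    kill "$tgt"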
00:07:15.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:15.723 04:23:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:15.723 04:23:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2673313 00:07:15.723 04:23:35 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:15.724 04:23:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2673313 00:07:15.724 04:23:35 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 2673313 ']' 00:07:15.724 04:23:35 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.724 04:23:35 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:15.724 04:23:35 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.724 04:23:35 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:15.724 04:23:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:15.724 [2024-07-14 04:23:35.771720] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:15.724 [2024-07-14 04:23:35.771811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673313 ] 00:07:15.724 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.724 [2024-07-14 04:23:35.831190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.981 [2024-07-14 04:23:35.916641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.981 04:23:36 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:15.981 04:23:36 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:15.981 04:23:36 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:16.239 { 00:07:16.239 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:07:16.239 "fields": { 00:07:16.239 "major": 24, 00:07:16.239 "minor": 5, 00:07:16.239 "patch": 1, 00:07:16.239 "suffix": "-pre", 00:07:16.239 "commit": "5fa2f5086" 00:07:16.239 } 00:07:16.239 } 00:07:16.239 04:23:36 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:16.239 04:23:36 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:16.239 04:23:36 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:16.239 04:23:36 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:16.239 04:23:36 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:16.239 04:23:36 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.239 04:23:36 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:16.239 04:23:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:16.239 04:23:36 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:16.239 04:23:36 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.497 04:23:36 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:16.497 04:23:36 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:16.497 04:23:36 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.497 04:23:36 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:16.497 04:23:36 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.497 04:23:36 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.497 04:23:36 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.497 04:23:36 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.497 04:23:36 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.497 04:23:36 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.497 04:23:36 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.498 04:23:36 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.498 04:23:36 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:16.498 04:23:36 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.498 request: 00:07:16.498 { 00:07:16.498 "method": "env_dpdk_get_mem_stats", 00:07:16.498 "req_id": 1 00:07:16.498 } 00:07:16.498 Got JSON-RPC error response 00:07:16.498 response: 00:07:16.498 { 00:07:16.498 "code": -32601, 00:07:16.498 "message": "Method not found" 00:07:16.498 } 00:07:16.498 04:23:36 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:16.498 04:23:36 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:16.498 04:23:36 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:16.498 04:23:36 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:16.498 04:23:36 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2673313 00:07:16.498 04:23:36 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 2673313 ']' 00:07:16.498 04:23:36 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 2673313 00:07:16.498 04:23:36 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:16.498 04:23:36 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:16.498 04:23:36 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2673313 00:07:16.757 04:23:36 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:16.757 04:23:36 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:16.757 04:23:36 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2673313' 00:07:16.757 killing process with pid 2673313 00:07:16.757 04:23:36 app_cmdline -- common/autotest_common.sh@965 -- # kill 2673313 00:07:16.757 04:23:36 app_cmdline -- common/autotest_common.sh@970 -- # wait 2673313 00:07:17.015 00:07:17.015 real 0m1.445s 00:07:17.015 user 0m1.762s 00:07:17.015 sys 0m0.460s 00:07:17.015 04:23:37 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.015 04:23:37 
app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:17.015 ************************************ 00:07:17.015 END TEST app_cmdline 00:07:17.015 ************************************ 00:07:17.015 04:23:37 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:17.015 04:23:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:17.015 04:23:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.015 04:23:37 -- common/autotest_common.sh@10 -- # set +x 00:07:17.015 ************************************ 00:07:17.015 START TEST version 00:07:17.015 ************************************ 00:07:17.015 04:23:37 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:17.015 * Looking for test storage... 00:07:17.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:17.274 04:23:37 version -- app/version.sh@17 -- # get_header_version major 00:07:17.274 04:23:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:17.274 04:23:37 version -- app/version.sh@14 -- # cut -f2 00:07:17.274 04:23:37 version -- app/version.sh@14 -- # tr -d '"' 00:07:17.274 04:23:37 version -- app/version.sh@17 -- # major=24 00:07:17.274 04:23:37 version -- app/version.sh@18 -- # get_header_version minor 00:07:17.274 04:23:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:17.274 04:23:37 version -- app/version.sh@14 -- # cut -f2 00:07:17.274 04:23:37 version -- app/version.sh@14 -- # tr -d '"' 00:07:17.274 04:23:37 version -- app/version.sh@18 -- # minor=5 00:07:17.274 04:23:37 version -- app/version.sh@19 -- # get_header_version patch 00:07:17.274 04:23:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:17.274 04:23:37 version -- app/version.sh@14 -- # cut -f2 00:07:17.274 04:23:37 version -- app/version.sh@14 -- # tr -d '"' 00:07:17.274 04:23:37 version -- app/version.sh@19 -- # patch=1 00:07:17.274 04:23:37 version -- app/version.sh@20 -- # get_header_version suffix 00:07:17.274 04:23:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:17.274 04:23:37 version -- app/version.sh@14 -- # cut -f2 00:07:17.274 04:23:37 version -- app/version.sh@14 -- # tr -d '"' 00:07:17.274 04:23:37 version -- app/version.sh@20 -- # suffix=-pre 00:07:17.274 04:23:37 version -- app/version.sh@22 -- # version=24.5 00:07:17.274 04:23:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:17.274 04:23:37 version -- app/version.sh@25 -- # version=24.5.1 00:07:17.274 04:23:37 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:17.274 04:23:37 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:17.274 04:23:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
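The app_cmdline run above amounts to the following sketch: with --rpcs-allowed only the two listed methods are callable, and anything else is rejected with the JSON-RPC "Method not found" (-32601) error seen in the log (socket readiness handling omitted, as before):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    "$SPDK/scripts/rpc.py" spdk_get_version                         # the version object logged above (24.05.1-pre, sha1 5fa2f5086)
    "$SPDK/scripts/rpc.py" rpc_get_methods | jq -r '.[]' | sort     # exactly the two allowed methods
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats                   # rejected: "Method not found"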
00:07:17.274 04:23:37 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:17.274 04:23:37 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:17.274 00:07:17.274 real 0m0.096s 00:07:17.274 user 0m0.047s 00:07:17.274 sys 0m0.069s 00:07:17.274 04:23:37 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.274 04:23:37 version -- common/autotest_common.sh@10 -- # set +x 00:07:17.274 ************************************ 00:07:17.274 END TEST version 00:07:17.274 ************************************ 00:07:17.274 04:23:37 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:17.274 04:23:37 -- spdk/autotest.sh@198 -- # uname -s 00:07:17.274 04:23:37 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:17.274 04:23:37 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:17.274 04:23:37 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:17.274 04:23:37 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:17.274 04:23:37 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:17.274 04:23:37 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:17.274 04:23:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:17.274 04:23:37 -- common/autotest_common.sh@10 -- # set +x 00:07:17.274 04:23:37 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:17.274 04:23:37 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:17.274 04:23:37 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:17.274 04:23:37 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:17.274 04:23:37 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:17.274 04:23:37 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:17.274 04:23:37 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:17.274 04:23:37 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:17.274 04:23:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.274 04:23:37 -- common/autotest_common.sh@10 -- # set +x 00:07:17.275 ************************************ 00:07:17.275 START TEST nvmf_tcp 00:07:17.275 ************************************ 00:07:17.275 04:23:37 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:17.275 * Looking for test storage... 00:07:17.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:17.275 04:23:37 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.275 04:23:37 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.275 04:23:37 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.275 04:23:37 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.275 04:23:37 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.275 04:23:37 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.275 04:23:37 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:17.275 04:23:37 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:17.275 04:23:37 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:17.275 04:23:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:17.275 04:23:37 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:17.275 04:23:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:17.275 04:23:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.275 04:23:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:17.275 ************************************ 00:07:17.275 START TEST nvmf_example 00:07:17.275 ************************************ 00:07:17.275 04:23:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:17.275 * Looking for test storage... 
00:07:17.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:17.275 04:23:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.275 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:17.535 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.535 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:17.536 04:23:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:19.441 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:19.441 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:19.442 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:19.442 Found net devices under 
0000:0a:00.0: cvl_0_0 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:19.442 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:19.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:19.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:07:19.442 00:07:19.442 --- 10.0.0.2 ping statistics --- 00:07:19.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.442 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:19.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:19.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:07:19.442 00:07:19.442 --- 10.0.0.1 ping statistics --- 00:07:19.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.442 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2675218 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2675218 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 2675218 ']' 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
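The nvmf_tcp_init steps traced above amount to moving the target-side port into its own network namespace, addressing both ends, opening TCP/4420, and verifying reachability in both directions. A condensed sketch of those commands, assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing seen in the trace:

# sketch of the nvmf_tcp_init sequence above (names/addresses taken from the trace)
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check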
00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:19.442 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.700 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.700 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:19.700 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:19.700 04:23:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:19.700 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:19.700 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.700 04:23:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:19.700 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.700 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.959 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.959 04:23:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:19.959 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.959 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.959 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.959 04:23:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:19.959 04:23:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:19.959 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.959 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.959 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.959 04:23:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:19.959 04:23:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:19.959 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.959 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.959 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.959 04:23:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:19.959 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.959 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.959 04:23:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.959 04:23:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:19.959 04:23:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:19.959 EAL: No free 2048 kB hugepages reported on node 1 
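The rpc_cmd calls above configure the running example target before the load test: a TCP transport, a 64 MiB / 512 B malloc bdev, subsystem cnode1 with that bdev as namespace 1, and a listener on 10.0.0.2:4420. Roughly the same configuration can be driven by hand with scripts/rpc.py (the socket handling of the rpc_cmd wrapper is an assumption here; the subcommands and arguments are exactly those in the trace):

# hand-driven equivalent of the rpc_cmd sequence above (sketch; assumes the default /var/tmp/spdk.sock RPC socket)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512                          # creates Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator-side load generator, as launched by the test:
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'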
00:07:29.936 Initializing NVMe Controllers 00:07:29.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:29.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:29.936 Initialization complete. Launching workers. 00:07:29.936 ======================================================== 00:07:29.936 Latency(us) 00:07:29.936 Device Information : IOPS MiB/s Average min max 00:07:29.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14885.00 58.14 4302.03 879.08 19098.03 00:07:29.936 ======================================================== 00:07:29.936 Total : 14885.00 58.14 4302.03 879.08 19098.03 00:07:29.936 00:07:29.936 04:23:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:29.936 04:23:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:29.936 04:23:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:29.936 04:23:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:29.936 04:23:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:29.936 04:23:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:29.936 04:23:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:29.936 04:23:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:29.936 rmmod nvme_tcp 00:07:29.936 rmmod nvme_fabrics 00:07:30.206 rmmod nvme_keyring 00:07:30.206 04:23:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:30.206 04:23:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:30.206 04:23:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:30.206 04:23:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2675218 ']' 00:07:30.206 04:23:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2675218 00:07:30.206 04:23:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 2675218 ']' 00:07:30.206 04:23:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 2675218 00:07:30.206 04:23:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:30.206 04:23:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:30.206 04:23:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2675218 00:07:30.206 04:23:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:30.206 04:23:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:30.206 04:23:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2675218' 00:07:30.206 killing process with pid 2675218 00:07:30.206 04:23:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 2675218 00:07:30.206 04:23:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 2675218 00:07:30.467 nvmf threads initialize successfully 00:07:30.467 bdev subsystem init successfully 00:07:30.467 created a nvmf target service 00:07:30.467 create targets's poll groups done 00:07:30.467 all subsystems of target started 00:07:30.467 nvmf target is running 00:07:30.467 all subsystems of target stopped 00:07:30.467 destroy targets's poll groups done 00:07:30.467 destroyed the nvmf target service 00:07:30.467 bdev subsystem finish successfully 00:07:30.467 nvmf threads destroy successfully 00:07:30.467 04:23:50 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:30.467 04:23:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:30.467 04:23:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:30.467 04:23:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:30.467 04:23:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:30.467 04:23:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.467 04:23:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.467 04:23:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.376 04:23:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:32.376 04:23:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:32.376 04:23:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.377 04:23:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.377 00:07:32.377 real 0m15.053s 00:07:32.377 user 0m41.829s 00:07:32.377 sys 0m3.226s 00:07:32.377 04:23:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:32.377 04:23:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.377 ************************************ 00:07:32.377 END TEST nvmf_example 00:07:32.377 ************************************ 00:07:32.377 04:23:52 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:32.377 04:23:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:32.377 04:23:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.377 04:23:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:32.377 ************************************ 00:07:32.377 START TEST nvmf_filesystem 00:07:32.377 ************************************ 00:07:32.377 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:32.638 * Looking for test storage... 
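As a quick sanity check on the nvmf_example figures above (not part of the test output): 14885 IOPS at the 4 KiB block size used by the perf run works out to the reported 58.14 MiB/s.

awk 'BEGIN { printf "%.2f MiB/s\n", 14885.00 * 4096 / (1024 * 1024) }'   # prints 58.14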
00:07:32.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:32.638 04:23:52 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:32.638 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:32.639 04:23:52 
nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:32.639 #define SPDK_CONFIG_H 00:07:32.639 #define SPDK_CONFIG_APPS 1 00:07:32.639 #define SPDK_CONFIG_ARCH native 00:07:32.639 #undef SPDK_CONFIG_ASAN 00:07:32.639 #undef SPDK_CONFIG_AVAHI 00:07:32.639 #undef SPDK_CONFIG_CET 00:07:32.639 #define SPDK_CONFIG_COVERAGE 1 00:07:32.639 #define SPDK_CONFIG_CROSS_PREFIX 00:07:32.639 #undef SPDK_CONFIG_CRYPTO 00:07:32.639 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:32.639 #undef SPDK_CONFIG_CUSTOMOCF 00:07:32.639 #undef SPDK_CONFIG_DAOS 00:07:32.639 #define SPDK_CONFIG_DAOS_DIR 00:07:32.639 #define SPDK_CONFIG_DEBUG 1 00:07:32.639 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:32.639 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:32.639 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:32.639 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:32.639 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:32.639 #undef SPDK_CONFIG_DPDK_UADK 00:07:32.639 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:32.639 #define SPDK_CONFIG_EXAMPLES 1 00:07:32.639 #undef SPDK_CONFIG_FC 00:07:32.639 #define SPDK_CONFIG_FC_PATH 00:07:32.639 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:32.639 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:32.639 #undef SPDK_CONFIG_FUSE 00:07:32.639 #undef SPDK_CONFIG_FUZZER 00:07:32.639 #define SPDK_CONFIG_FUZZER_LIB 00:07:32.639 #undef SPDK_CONFIG_GOLANG 00:07:32.639 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:32.639 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:32.639 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:32.639 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:32.639 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:32.639 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:32.639 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:32.639 #define SPDK_CONFIG_IDXD 1 00:07:32.639 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:32.639 #undef SPDK_CONFIG_IPSEC_MB 00:07:32.639 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:32.639 #define SPDK_CONFIG_ISAL 1 00:07:32.639 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:32.639 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:32.639 #define SPDK_CONFIG_LIBDIR 00:07:32.639 #undef SPDK_CONFIG_LTO 00:07:32.639 #define SPDK_CONFIG_MAX_LCORES 
00:07:32.639 #define SPDK_CONFIG_NVME_CUSE 1 00:07:32.639 #undef SPDK_CONFIG_OCF 00:07:32.639 #define SPDK_CONFIG_OCF_PATH 00:07:32.639 #define SPDK_CONFIG_OPENSSL_PATH 00:07:32.639 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:32.639 #define SPDK_CONFIG_PGO_DIR 00:07:32.639 #undef SPDK_CONFIG_PGO_USE 00:07:32.639 #define SPDK_CONFIG_PREFIX /usr/local 00:07:32.639 #undef SPDK_CONFIG_RAID5F 00:07:32.639 #undef SPDK_CONFIG_RBD 00:07:32.639 #define SPDK_CONFIG_RDMA 1 00:07:32.639 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:32.639 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:32.639 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:32.639 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:32.639 #define SPDK_CONFIG_SHARED 1 00:07:32.639 #undef SPDK_CONFIG_SMA 00:07:32.639 #define SPDK_CONFIG_TESTS 1 00:07:32.639 #undef SPDK_CONFIG_TSAN 00:07:32.639 #define SPDK_CONFIG_UBLK 1 00:07:32.639 #define SPDK_CONFIG_UBSAN 1 00:07:32.639 #undef SPDK_CONFIG_UNIT_TESTS 00:07:32.639 #undef SPDK_CONFIG_URING 00:07:32.639 #define SPDK_CONFIG_URING_PATH 00:07:32.639 #undef SPDK_CONFIG_URING_ZNS 00:07:32.639 #undef SPDK_CONFIG_USDT 00:07:32.639 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:32.639 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:32.639 #define SPDK_CONFIG_VFIO_USER 1 00:07:32.639 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:32.639 #define SPDK_CONFIG_VHOST 1 00:07:32.639 #define SPDK_CONFIG_VIRTIO 1 00:07:32.639 #undef SPDK_CONFIG_VTUNE 00:07:32.639 #define SPDK_CONFIG_VTUNE_DIR 00:07:32.639 #define SPDK_CONFIG_WERROR 1 00:07:32.639 #define SPDK_CONFIG_WPDK_DIR 00:07:32.639 #undef SPDK_CONFIG_XNVME 00:07:32.639 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:32.639 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v22.11.4 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.640 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm 
-rf /var/tmp/asan_suppression_file 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export 
CLEAR_HUGE=yes 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 2676791 ]] 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 2676791 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.ylabJM 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ylabJM/tests/target /tmp/spdk.ylabJM 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # 
mounts["$mount"]=spdk_devtmpfs 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=953643008 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4330786816 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=53492346880 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994708992 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8502362112 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30941716480 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997352448 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=55635968 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12390182912 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398944256 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8761344 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:32.641 04:23:52 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30996168704 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997356544 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=1187840 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:32.641 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199463936 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199468032 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:32.642 * Looking for test storage... 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=53492346880 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=10716954624 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:32.642 
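The block above is autotest_common.sh's set_test_storage picking a directory with enough free space for the test: it parses df, compares the available bytes on the mount backing the test directory against the ~2 GiB it wants, and exports SPDK_TEST_STORAGE. A minimal stand-alone sketch of that check (GNU df assumed; the real helper also special-cases tmpfs/ramfs mounts and checks that the filesystem would not exceed roughly 95% utilisation):

  # Reproduce the free-space check seen in the trace (numbers from this run).
  requested_size=2214592512      # ~2 GiB of test data plus overhead
  candidate=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target

  avail=$(df -B1 --output=avail "$candidate" | tail -1)
  if (( avail >= requested_size )); then
      export SPDK_TEST_STORAGE=$candidate
      printf '* Found test storage at %s\n' "$SPDK_TEST_STORAGE"
  else
      echo "not enough space on $(df --output=target "$candidate" | tail -1)" >&2
  fi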
04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.642 04:23:52 
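Here test/nvmf/common.sh sets the defaults used for the rest of the run: listener ports 4420-4422, phy (real NIC) mode, and a host NQN generated on the fly with nvme-cli whose UUID suffix doubles as the host ID. A small sketch of that derivation (the exact parameter expansion is an assumption, but it reproduces the values seen above):

  # nvme gen-hostnqn emits a string like nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep only the trailing UUID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")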
nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:32.642 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:32.643 04:23:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:32.643 04:23:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:34.614 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:34.614 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.614 04:23:54 
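The arrays above are a whitelist of supported NIC PCI device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, several Mellanox parts); the script then walks /sys/bus/pci/devices/<addr>/net/ to map each matching port to its kernel interface, which on this host yields the two ice-driven E810 ports 0000:0a:00.0 and 0000:0a:00.1. An equivalent stand-alone sketch using lspci (a hypothetical helper, not from the SPDK scripts themselves):

  # Find E810 ports (device ID 0x159b) and the netdev behind each one.
  for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
      for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdev" ] && echo "Found net device under $pci: ${netdev##*/}"
      done
  done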
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:34.614 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.614 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:34.614 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link 
set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:34.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:07:34.615 00:07:34.615 --- 10.0.0.2 ping statistics --- 00:07:34.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.615 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:34.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:34.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:07:34.615 00:07:34.615 --- 10.0.0.1 ping statistics --- 00:07:34.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.615 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.615 ************************************ 00:07:34.615 START TEST nvmf_filesystem_no_in_capsule 00:07:34.615 ************************************ 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:34.615 04:23:54 
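What the trace has just done is carve the two E810 ports into a point-to-point NVMe/TCP testbed: cvl_0_0 is pushed into the namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), port 4420 is opened in iptables, both directions are ping-verified, and the nvme-tcp initiator module is loaded. Condensed into a stand-alone sketch (the two ports are presumably looped back-to-back or through a switch for the pings to succeed):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1
  modprobe nvme-tcp                                          # kernel initiator driver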
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2678416 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2678416 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 2678416 ']' 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:34.615 04:23:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.615 [2024-07-14 04:23:54.761555] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:34.615 [2024-07-14 04:23:54.761654] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.615 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.873 [2024-07-14 04:23:54.832228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.873 [2024-07-14 04:23:54.925665] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.873 [2024-07-14 04:23:54.925728] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.873 [2024-07-14 04:23:54.925754] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.873 [2024-07-14 04:23:54.925768] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.873 [2024-07-14 04:23:54.925781] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
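nvmfappstart then launches the SPDK target binary inside that namespace with a 4-core mask and full tracepoints, and waitforlisten blocks until the process is up and listening on the default RPC socket /var/tmp/spdk.sock. A simplified sketch of the same start-and-wait (the real waitforlisten also probes the RPC server itself rather than just the socket file):

  # -m 0xF: cores 0-3, -e 0xFFFF: all tracepoint groups, -i 0: shared-memory/instance ID
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  rpc_sock=/var/tmp/spdk.sock
  until [ -S "$rpc_sock" ]; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt died" >&2; exit 1; }
      sleep 0.5
  done
  echo "nvmf_tgt ($nvmfpid) is up; RPC at $rpc_sock"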
00:07:34.873 [2024-07-14 04:23:54.925884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.873 [2024-07-14 04:23:54.925920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.873 [2024-07-14 04:23:54.926038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.873 [2024-07-14 04:23:54.926040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.873 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:34.873 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:34.873 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:34.873 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.873 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.131 [2024-07-14 04:23:55.075471] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.131 Malloc1 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.131 [2024-07-14 04:23:55.262281] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.131 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:35.131 { 00:07:35.131 "name": "Malloc1", 00:07:35.131 "aliases": [ 00:07:35.131 "75ea3594-a3bb-4333-958a-a36211897e09" 00:07:35.131 ], 00:07:35.131 "product_name": "Malloc disk", 00:07:35.131 "block_size": 512, 00:07:35.131 "num_blocks": 1048576, 00:07:35.131 "uuid": "75ea3594-a3bb-4333-958a-a36211897e09", 00:07:35.131 "assigned_rate_limits": { 00:07:35.131 "rw_ios_per_sec": 0, 00:07:35.131 "rw_mbytes_per_sec": 0, 00:07:35.131 "r_mbytes_per_sec": 0, 00:07:35.131 "w_mbytes_per_sec": 0 00:07:35.131 }, 00:07:35.131 "claimed": true, 00:07:35.131 "claim_type": "exclusive_write", 00:07:35.131 "zoned": false, 00:07:35.131 "supported_io_types": { 00:07:35.131 "read": true, 00:07:35.131 "write": true, 00:07:35.131 "unmap": true, 00:07:35.131 "write_zeroes": true, 00:07:35.131 "flush": true, 00:07:35.131 "reset": true, 00:07:35.131 "compare": false, 00:07:35.131 "compare_and_write": false, 00:07:35.131 "abort": true, 00:07:35.131 "nvme_admin": false, 00:07:35.131 "nvme_io": false 00:07:35.131 }, 00:07:35.131 "memory_domains": [ 00:07:35.132 { 00:07:35.132 "dma_device_id": "system", 00:07:35.132 "dma_device_type": 1 00:07:35.132 }, 00:07:35.132 { 00:07:35.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.132 "dma_device_type": 2 00:07:35.132 } 00:07:35.132 ], 00:07:35.132 "driver_specific": {} 00:07:35.132 } 00:07:35.132 ]' 00:07:35.132 
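With the target running, the test provisions it entirely over JSON-RPC: a TCP transport (with in-capsule data disabled, matching this "no_in_capsule" variant), a 512 MiB malloc RAM disk, a subsystem with serial SPDKISFASTANDAWESOME, its namespace, and a listener on 10.0.0.2:4420; it then reads the bdev back to learn its size. The same sequence issued directly with scripts/rpc.py from the SPDK checkout would look roughly like this (rpc_cmd above is presumably a thin wrapper over it, talking to /var/tmp/spdk.sock):

  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

  $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0       # -c 0: the no_in_capsule variant
  $RPC bdev_malloc_create 512 512 -b Malloc1              # 512 MiB RAM disk, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  $RPC bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks'   # 1048576, i.e. 512 MiB at 512 B/block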
04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:35.132 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:35.132 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:35.391 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:35.391 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:35.391 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:35.391 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:35.391 04:23:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:35.959 04:23:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:35.959 04:23:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:35.959 04:23:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:35.959 04:23:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:35.959 04:23:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:37.867 04:23:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:37.867 04:23:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:37.867 04:23:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:37.867 04:23:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:37.867 04:23:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:37.867 04:23:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:37.867 04:23:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:37.867 04:23:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:37.867 04:23:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:37.867 04:23:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:38.127 04:23:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:38.127 04:23:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:38.127 04:23:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:38.127 04:23:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:38.127 04:23:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:38.127 04:23:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:38.127 04:23:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:38.127 04:23:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:39.063 04:23:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:40.000 04:24:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:40.000 04:24:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:40.000 04:24:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:40.000 04:24:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:40.000 04:24:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.000 ************************************ 00:07:40.000 START TEST filesystem_ext4 00:07:40.000 ************************************ 00:07:40.000 04:24:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:40.000 04:24:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:40.000 04:24:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:40.000 04:24:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:40.000 04:24:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:40.000 04:24:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:40.000 04:24:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:40.000 04:24:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:40.000 04:24:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:40.000 04:24:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:40.000 04:24:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:40.000 mke2fs 1.46.5 (30-Dec-2021) 00:07:40.259 Discarding device blocks: 0/522240 done 00:07:40.259 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:40.259 
Filesystem UUID: 45fa3d89-9d1f-4ae2-8291-b82132a7c655 00:07:40.259 Superblock backups stored on blocks: 00:07:40.259 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:40.259 00:07:40.259 Allocating group tables: 0/64 done 00:07:40.259 Writing inode tables: 0/64 done 00:07:43.547 Creating journal (8192 blocks): done 00:07:43.547 Writing superblocks and filesystem accounting information: 0/64 done 00:07:43.547 00:07:43.547 04:24:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:43.547 04:24:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:43.806 04:24:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:43.806 04:24:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:43.806 04:24:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:43.806 04:24:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:43.806 04:24:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:43.806 04:24:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:43.806 04:24:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2678416 00:07:43.806 04:24:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:43.806 04:24:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:43.806 04:24:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:43.806 04:24:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:43.806 00:07:43.806 real 0m3.819s 00:07:43.806 user 0m0.016s 00:07:43.806 sys 0m0.058s 00:07:43.806 04:24:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.806 04:24:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:43.806 ************************************ 00:07:43.806 END TEST filesystem_ext4 00:07:43.806 ************************************ 00:07:43.806 04:24:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:43.806 04:24:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:43.806 04:24:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.806 04:24:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.065 ************************************ 00:07:44.065 START TEST filesystem_btrfs 00:07:44.065 ************************************ 00:07:44.065 04:24:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:44.065 04:24:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:44.065 04:24:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:44.065 04:24:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:44.065 04:24:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:44.065 04:24:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:44.065 04:24:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:44.065 04:24:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:44.065 04:24:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:44.065 04:24:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:44.065 04:24:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:44.325 btrfs-progs v6.6.2 00:07:44.325 See https://btrfs.readthedocs.io for more information. 00:07:44.325 00:07:44.325 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:44.325 NOTE: several default settings have changed in version 5.15, please make sure 00:07:44.325 this does not affect your deployments: 00:07:44.325 - DUP for metadata (-m dup) 00:07:44.325 - enabled no-holes (-O no-holes) 00:07:44.325 - enabled free-space-tree (-R free-space-tree) 00:07:44.325 00:07:44.325 Label: (null) 00:07:44.325 UUID: a0aed7e3-f1d9-441e-afaa-7bb22fb75178 00:07:44.325 Node size: 16384 00:07:44.325 Sector size: 4096 00:07:44.325 Filesystem size: 510.00MiB 00:07:44.325 Block group profiles: 00:07:44.325 Data: single 8.00MiB 00:07:44.325 Metadata: DUP 32.00MiB 00:07:44.325 System: DUP 8.00MiB 00:07:44.325 SSD detected: yes 00:07:44.325 Zoned device: no 00:07:44.325 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:44.325 Runtime features: free-space-tree 00:07:44.325 Checksum: crc32c 00:07:44.325 Number of devices: 1 00:07:44.325 Devices: 00:07:44.325 ID SIZE PATH 00:07:44.325 1 510.00MiB /dev/nvme0n1p1 00:07:44.325 00:07:44.325 04:24:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:44.325 04:24:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2678416 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:45.265 00:07:45.265 real 0m1.341s 00:07:45.265 user 0m0.021s 00:07:45.265 sys 0m0.113s 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:45.265 ************************************ 00:07:45.265 END TEST filesystem_btrfs 00:07:45.265 ************************************ 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:45.265 04:24:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.265 ************************************ 00:07:45.265 START TEST filesystem_xfs 00:07:45.265 ************************************ 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:45.265 04:24:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:45.525 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:45.525 = sectsz=512 attr=2, projid32bit=1 00:07:45.525 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:45.525 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:45.525 data = bsize=4096 blocks=130560, imaxpct=25 00:07:45.525 = sunit=0 swidth=0 blks 00:07:45.525 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:45.525 log =internal log bsize=4096 blocks=16384, version=2 00:07:45.525 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:45.525 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:46.464 Discarding blocks...Done. 
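Before the ext4/btrfs/xfs subtests above could run, the host side of the trace attached the remote namespace and carved a single GPT partition out of it. Stripped of waitforserial's retry counter and the size cross-check, the flow is roughly as follows (hostnqn/hostid are the machine identity values shown in the trace):

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  # wait until a block device carrying the subsystem serial shows up, then grab its name
  until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  # one partition spanning the whole 512 MiB namespace, reused by every filesystem subtest
  mkdir -p /mnt/device
  parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe; sleep 1

Each filesystem subtest then calls the make_filesystem helper, which as the force=-F / force=-f xtrace shows simply runs mkfs.<fstype> on /dev/nvme0n1p1, passing -F for ext4 and -f for the other filesystems.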
00:07:46.464 04:24:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:46.464 04:24:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:48.998 04:24:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:48.998 04:24:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:48.998 04:24:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:48.998 04:24:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:48.999 04:24:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:48.999 04:24:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:48.999 04:24:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2678416 00:07:48.999 04:24:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:48.999 04:24:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:48.999 04:24:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:48.999 04:24:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:48.999 00:07:48.999 real 0m3.589s 00:07:48.999 user 0m0.014s 00:07:48.999 sys 0m0.057s 00:07:48.999 04:24:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.999 04:24:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:48.999 ************************************ 00:07:48.999 END TEST filesystem_xfs 00:07:48.999 ************************************ 00:07:48.999 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:48.999 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:48.999 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:48.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.999 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:48.999 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:48.999 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:48.999 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.999 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:48.999 
04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.999 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:48.999 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:48.999 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.999 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.258 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.258 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:49.258 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2678416 00:07:49.258 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 2678416 ']' 00:07:49.258 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 2678416 00:07:49.258 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:49.258 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:49.258 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2678416 00:07:49.258 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:49.258 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:49.258 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2678416' 00:07:49.258 killing process with pid 2678416 00:07:49.258 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 2678416 00:07:49.258 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 2678416 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:49.517 00:07:49.517 real 0m14.942s 00:07:49.517 user 0m57.504s 00:07:49.517 sys 0m2.009s 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.517 ************************************ 00:07:49.517 END TEST nvmf_filesystem_no_in_capsule 00:07:49.517 ************************************ 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:49.517 
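The teardown just traced mirrors the setup in reverse. Ignoring the wait/retry helpers, and with $nvmfpid standing in for the target PID the trace prints (2678416), it amounts to:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1    # drop the test partition under a lock
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  # wait for the device with the subsystem serial to disappear from the host
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill $nvmfpid && wait $nvmfpid                     # stop the nvmf_tgt used for this pass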
************************************ 00:07:49.517 START TEST nvmf_filesystem_in_capsule 00:07:49.517 ************************************ 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2681001 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2681001 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 2681001 ']' 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:49.517 04:24:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.776 [2024-07-14 04:24:09.747256] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:49.776 [2024-07-14 04:24:09.747322] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.776 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.776 [2024-07-14 04:24:09.812425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:49.776 [2024-07-14 04:24:09.903905] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.776 [2024-07-14 04:24:09.903969] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.776 [2024-07-14 04:24:09.903985] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.776 [2024-07-14 04:24:09.903999] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.776 [2024-07-14 04:24:09.904011] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
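The in_capsule pass that starts here re-runs exactly the same filesystem exercise; the only functional difference from the first pass is the transport configuration. With -c 4096, host commands may carry up to 4096 bytes of data inside the NVMe/TCP command capsule itself rather than having the target solicit the data in a separate transfer, which is what the test name refers to. As a sketch, in the same assumed scripts/rpc.py form as above:

  # first pass (nvmf_filesystem_no_in_capsule)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  # this pass (nvmf_filesystem_in_capsule): allow 4096 bytes of in-capsule data
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096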
00:07:49.776 [2024-07-14 04:24:09.904072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.776 [2024-07-14 04:24:09.904130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.776 [2024-07-14 04:24:09.904246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.776 [2024-07-14 04:24:09.904248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.036 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:50.036 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:50.036 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:50.036 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:50.036 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.036 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.036 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:50.036 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:50.036 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.036 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.036 [2024-07-14 04:24:10.053755] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.036 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.036 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:50.036 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.036 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.036 Malloc1 00:07:50.036 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.036 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:50.036 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.036 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.298 04:24:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.298 [2024-07-14 04:24:10.240339] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:50.298 { 00:07:50.298 "name": "Malloc1", 00:07:50.298 "aliases": [ 00:07:50.298 "8e26f778-b834-4598-a037-138032330280" 00:07:50.298 ], 00:07:50.298 "product_name": "Malloc disk", 00:07:50.298 "block_size": 512, 00:07:50.298 "num_blocks": 1048576, 00:07:50.298 "uuid": "8e26f778-b834-4598-a037-138032330280", 00:07:50.298 "assigned_rate_limits": { 00:07:50.298 "rw_ios_per_sec": 0, 00:07:50.298 "rw_mbytes_per_sec": 0, 00:07:50.298 "r_mbytes_per_sec": 0, 00:07:50.298 "w_mbytes_per_sec": 0 00:07:50.298 }, 00:07:50.298 "claimed": true, 00:07:50.298 "claim_type": "exclusive_write", 00:07:50.298 "zoned": false, 00:07:50.298 "supported_io_types": { 00:07:50.298 "read": true, 00:07:50.298 "write": true, 00:07:50.298 "unmap": true, 00:07:50.298 "write_zeroes": true, 00:07:50.298 "flush": true, 00:07:50.298 "reset": true, 00:07:50.298 "compare": false, 00:07:50.298 "compare_and_write": false, 00:07:50.298 "abort": true, 00:07:50.298 "nvme_admin": false, 00:07:50.298 "nvme_io": false 00:07:50.298 }, 00:07:50.298 "memory_domains": [ 00:07:50.298 { 00:07:50.298 "dma_device_id": "system", 00:07:50.298 "dma_device_type": 1 00:07:50.298 }, 00:07:50.298 { 00:07:50.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.298 "dma_device_type": 2 00:07:50.298 } 00:07:50.298 ], 00:07:50.298 "driver_specific": {} 00:07:50.298 } 00:07:50.298 ]' 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] 
.block_size' 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:50.298 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:50.892 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:50.892 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:50.893 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:50.893 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:50.893 04:24:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:52.797 04:24:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:52.797 04:24:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:52.797 04:24:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:52.797 04:24:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:52.797 04:24:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:52.797 04:24:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:52.797 04:24:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:52.797 04:24:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:52.797 04:24:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:52.797 04:24:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:52.797 04:24:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:52.797 04:24:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:52.797 04:24:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:52.797 04:24:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:07:52.797 04:24:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:52.797 04:24:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:52.797 04:24:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:53.362 04:24:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:54.295 04:24:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:55.231 04:24:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:55.231 04:24:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:55.231 04:24:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:55.231 04:24:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.231 04:24:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.231 ************************************ 00:07:55.231 START TEST filesystem_in_capsule_ext4 00:07:55.231 ************************************ 00:07:55.231 04:24:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:55.231 04:24:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:55.231 04:24:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:55.231 04:24:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:55.231 04:24:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:55.232 04:24:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:55.232 04:24:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:55.232 04:24:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:55.232 04:24:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:55.232 04:24:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:55.232 04:24:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:55.232 mke2fs 1.46.5 (30-Dec-2021) 00:07:55.489 Discarding device blocks: 0/522240 done 00:07:55.489 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:55.489 Filesystem UUID: 063c9bfe-eb94-43e7-a8df-e800da010d31 00:07:55.490 Superblock backups stored on blocks: 00:07:55.490 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:55.490 00:07:55.490 Allocating group tables: 0/64 done 00:07:55.490 Writing inode tables: 0/64 done 00:07:55.490 Creating journal (8192 blocks): done 00:07:56.574 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:07:56.574 00:07:56.574 04:24:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:56.574 04:24:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2681001 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:57.515 00:07:57.515 real 0m2.145s 00:07:57.515 user 0m0.022s 00:07:57.515 sys 0m0.050s 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:57.515 ************************************ 00:07:57.515 END TEST filesystem_in_capsule_ext4 00:07:57.515 ************************************ 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.515 ************************************ 00:07:57.515 START TEST filesystem_in_capsule_btrfs 00:07:57.515 ************************************ 00:07:57.515 04:24:17 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:57.515 04:24:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:58.083 btrfs-progs v6.6.2 00:07:58.083 See https://btrfs.readthedocs.io for more information. 00:07:58.083 00:07:58.083 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:58.083 NOTE: several default settings have changed in version 5.15, please make sure 00:07:58.083 this does not affect your deployments: 00:07:58.083 - DUP for metadata (-m dup) 00:07:58.083 - enabled no-holes (-O no-holes) 00:07:58.083 - enabled free-space-tree (-R free-space-tree) 00:07:58.083 00:07:58.083 Label: (null) 00:07:58.083 UUID: 7e2468c8-5fed-4b9d-9f3f-ca644771e3d8 00:07:58.083 Node size: 16384 00:07:58.083 Sector size: 4096 00:07:58.083 Filesystem size: 510.00MiB 00:07:58.083 Block group profiles: 00:07:58.083 Data: single 8.00MiB 00:07:58.083 Metadata: DUP 32.00MiB 00:07:58.083 System: DUP 8.00MiB 00:07:58.083 SSD detected: yes 00:07:58.083 Zoned device: no 00:07:58.083 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:58.083 Runtime features: free-space-tree 00:07:58.083 Checksum: crc32c 00:07:58.083 Number of devices: 1 00:07:58.083 Devices: 00:07:58.083 ID SIZE PATH 00:07:58.083 1 510.00MiB /dev/nvme0n1p1 00:07:58.083 00:07:58.083 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:58.083 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2681001 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:59.023 00:07:59.023 real 0m1.378s 00:07:59.023 user 0m0.020s 00:07:59.023 sys 0m0.110s 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:59.023 ************************************ 00:07:59.023 END TEST filesystem_in_capsule_btrfs 00:07:59.023 ************************************ 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.023 ************************************ 00:07:59.023 START TEST filesystem_in_capsule_xfs 00:07:59.023 ************************************ 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:59.023 04:24:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:59.023 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:59.023 = sectsz=512 attr=2, projid32bit=1 00:07:59.023 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:59.023 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:59.023 data = bsize=4096 blocks=130560, imaxpct=25 00:07:59.023 = sunit=0 swidth=0 blks 00:07:59.023 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:59.023 log =internal log bsize=4096 blocks=16384, version=2 00:07:59.023 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:59.023 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:59.958 Discarding blocks...Done. 
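As in the earlier subtests, the xfs filesystem just created is then exercised with the short mount/write/remove cycle that the following lines trace from target/filesystem.sh. In isolation, with $nvmfpid standing in for the PID shown in the trace (2681001), it is just:

  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa        # create a file on the NVMe/TCP-backed filesystem
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 $nvmfpid                            # confirm the target survived the I/O
  lsblk -l -o NAME | grep -q -w nvme0n1p1     # and the partition is still visible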
00:07:59.958 04:24:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:59.958 04:24:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2681001 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:01.862 00:08:01.862 real 0m2.676s 00:08:01.862 user 0m0.015s 00:08:01.862 sys 0m0.058s 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:01.862 ************************************ 00:08:01.862 END TEST filesystem_in_capsule_xfs 00:08:01.862 ************************************ 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:01.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:01.862 04:24:21 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2681001 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 2681001 ']' 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 2681001 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2681001 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2681001' 00:08:01.862 killing process with pid 2681001 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 2681001 00:08:01.862 04:24:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 2681001 00:08:02.121 04:24:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:02.121 00:08:02.121 real 0m12.597s 00:08:02.121 user 0m48.460s 00:08:02.121 sys 0m1.821s 00:08:02.121 04:24:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.121 04:24:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.121 ************************************ 00:08:02.121 END TEST nvmf_filesystem_in_capsule 00:08:02.121 ************************************ 00:08:02.380 04:24:22 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:02.380 04:24:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:02.380 04:24:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:02.380 04:24:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:02.380 04:24:22 nvmf_tcp.nvmf_filesystem 
-- nvmf/common.sh@120 -- # set +e 00:08:02.380 04:24:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:02.380 04:24:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:02.380 rmmod nvme_tcp 00:08:02.380 rmmod nvme_fabrics 00:08:02.380 rmmod nvme_keyring 00:08:02.380 04:24:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:02.380 04:24:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:02.380 04:24:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:02.380 04:24:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:02.380 04:24:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:02.381 04:24:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:02.381 04:24:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:02.381 04:24:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:02.381 04:24:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:02.381 04:24:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.381 04:24:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.381 04:24:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.287 04:24:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:04.287 00:08:04.287 real 0m31.914s 00:08:04.287 user 1m46.787s 00:08:04.287 sys 0m5.368s 00:08:04.287 04:24:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:04.287 04:24:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.287 ************************************ 00:08:04.287 END TEST nvmf_filesystem 00:08:04.287 ************************************ 00:08:04.287 04:24:24 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:04.287 04:24:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:04.287 04:24:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.287 04:24:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:04.546 ************************************ 00:08:04.546 START TEST nvmf_target_discovery 00:08:04.546 ************************************ 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:04.546 * Looking for test storage... 
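The nvmf_target_discovery run that starts here first builds its TCP test bed via nvmftestinit; the commands it issues (visible further down in this log) amount to the outline below. The interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses and port 4420 come from the log itself; treat this as an illustrative summary, not the exact nvmf/common.sh code.

# Sketch of the NVMe/TCP test-bed setup performed by nvmftestinit (illustrative).
# cvl_0_0 becomes the target-side interface inside a network namespace,
# cvl_0_1 stays in the default namespace as the initiator side.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in

ping -c 1 10.0.0.2                                  # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator reachability

modprobe nvme-tcp                                   # host-side NVMe/TCP driver for the initiator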
00:08:04.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.546 04:24:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:04.547 04:24:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:06.455 04:24:26 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:06.455 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:06.455 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.455 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:06.456 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:06.456 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:06.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:06.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:08:06.456 00:08:06.456 --- 10.0.0.2 ping statistics --- 00:08:06.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.456 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:06.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:06.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:08:06.456 00:08:06.456 --- 10.0.0.1 ping statistics --- 00:08:06.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.456 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2684608 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2684608 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 2684608 ']' 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:06.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:06.456 04:24:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.740 [2024-07-14 04:24:26.662409] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:06.740 [2024-07-14 04:24:26.662506] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.740 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.740 [2024-07-14 04:24:26.729102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.740 [2024-07-14 04:24:26.816992] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.740 [2024-07-14 04:24:26.817054] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.740 [2024-07-14 04:24:26.817068] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.740 [2024-07-14 04:24:26.817080] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.740 [2024-07-14 04:24:26.817090] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.740 [2024-07-14 04:24:26.817140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.740 [2024-07-14 04:24:26.817202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.740 [2024-07-14 04:24:26.817269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.740 [2024-07-14 04:24:26.817272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.999 04:24:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:06.999 04:24:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:06.999 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:06.999 04:24:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:06.999 04:24:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.999 04:24:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.999 04:24:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:06.999 04:24:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.999 04:24:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.999 [2024-07-14 04:24:26.981596] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.999 04:24:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.999 04:24:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:06.999 04:24:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:06.999 04:24:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:06.999 04:24:26 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.999 04:24:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.999 Null1 00:08:06.999 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.999 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:06.999 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.999 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.999 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.999 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:06.999 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.999 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.999 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.999 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:06.999 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.999 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.999 [2024-07-14 04:24:27.021912] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.999 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.999 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:06.999 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.000 Null2 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:07.000 04:24:27 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.000 Null3 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.000 Null4 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.000 04:24:27 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.000 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:07.260 00:08:07.260 Discovery Log Number of Records 6, Generation counter 6 00:08:07.260 =====Discovery Log Entry 0====== 00:08:07.260 trtype: tcp 00:08:07.260 adrfam: ipv4 00:08:07.260 subtype: current discovery subsystem 00:08:07.260 treq: not required 00:08:07.260 portid: 0 00:08:07.260 trsvcid: 4420 00:08:07.260 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:07.260 traddr: 10.0.0.2 00:08:07.260 eflags: explicit discovery connections, duplicate discovery information 00:08:07.260 sectype: none 00:08:07.260 =====Discovery Log Entry 1====== 00:08:07.260 trtype: tcp 00:08:07.260 adrfam: ipv4 00:08:07.260 subtype: nvme subsystem 00:08:07.260 treq: not required 00:08:07.260 portid: 0 00:08:07.260 trsvcid: 4420 00:08:07.260 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:07.260 traddr: 10.0.0.2 00:08:07.260 eflags: none 00:08:07.260 sectype: none 00:08:07.260 =====Discovery Log Entry 2====== 00:08:07.260 trtype: tcp 00:08:07.260 adrfam: ipv4 00:08:07.260 subtype: nvme subsystem 00:08:07.260 treq: not required 00:08:07.260 portid: 0 00:08:07.260 trsvcid: 4420 00:08:07.260 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:07.260 traddr: 10.0.0.2 00:08:07.260 eflags: none 00:08:07.260 sectype: none 00:08:07.260 =====Discovery Log Entry 3====== 00:08:07.260 trtype: tcp 00:08:07.260 adrfam: ipv4 00:08:07.260 subtype: nvme subsystem 00:08:07.260 treq: not required 00:08:07.260 portid: 0 00:08:07.260 trsvcid: 4420 00:08:07.260 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:07.260 traddr: 10.0.0.2 00:08:07.260 eflags: none 00:08:07.260 sectype: none 00:08:07.260 =====Discovery Log Entry 4====== 00:08:07.260 trtype: tcp 00:08:07.260 adrfam: ipv4 00:08:07.260 subtype: nvme subsystem 00:08:07.260 treq: not required 
00:08:07.260 portid: 0 00:08:07.260 trsvcid: 4420 00:08:07.260 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:07.260 traddr: 10.0.0.2 00:08:07.260 eflags: none 00:08:07.260 sectype: none 00:08:07.260 =====Discovery Log Entry 5====== 00:08:07.260 trtype: tcp 00:08:07.260 adrfam: ipv4 00:08:07.260 subtype: discovery subsystem referral 00:08:07.260 treq: not required 00:08:07.260 portid: 0 00:08:07.260 trsvcid: 4430 00:08:07.260 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:07.260 traddr: 10.0.0.2 00:08:07.260 eflags: none 00:08:07.260 sectype: none 00:08:07.260 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:07.260 Perform nvmf subsystem discovery via RPC 00:08:07.260 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:07.260 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.260 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.260 [ 00:08:07.260 { 00:08:07.260 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:07.260 "subtype": "Discovery", 00:08:07.260 "listen_addresses": [ 00:08:07.260 { 00:08:07.260 "trtype": "TCP", 00:08:07.260 "adrfam": "IPv4", 00:08:07.260 "traddr": "10.0.0.2", 00:08:07.260 "trsvcid": "4420" 00:08:07.260 } 00:08:07.260 ], 00:08:07.260 "allow_any_host": true, 00:08:07.260 "hosts": [] 00:08:07.260 }, 00:08:07.260 { 00:08:07.260 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.260 "subtype": "NVMe", 00:08:07.260 "listen_addresses": [ 00:08:07.260 { 00:08:07.260 "trtype": "TCP", 00:08:07.260 "adrfam": "IPv4", 00:08:07.260 "traddr": "10.0.0.2", 00:08:07.260 "trsvcid": "4420" 00:08:07.260 } 00:08:07.260 ], 00:08:07.260 "allow_any_host": true, 00:08:07.260 "hosts": [], 00:08:07.260 "serial_number": "SPDK00000000000001", 00:08:07.260 "model_number": "SPDK bdev Controller", 00:08:07.260 "max_namespaces": 32, 00:08:07.260 "min_cntlid": 1, 00:08:07.260 "max_cntlid": 65519, 00:08:07.260 "namespaces": [ 00:08:07.260 { 00:08:07.260 "nsid": 1, 00:08:07.260 "bdev_name": "Null1", 00:08:07.260 "name": "Null1", 00:08:07.260 "nguid": "6DA850BCEFEA41988F07B29D04D19A5F", 00:08:07.260 "uuid": "6da850bc-efea-4198-8f07-b29d04d19a5f" 00:08:07.260 } 00:08:07.260 ] 00:08:07.260 }, 00:08:07.260 { 00:08:07.260 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:07.260 "subtype": "NVMe", 00:08:07.260 "listen_addresses": [ 00:08:07.260 { 00:08:07.260 "trtype": "TCP", 00:08:07.260 "adrfam": "IPv4", 00:08:07.260 "traddr": "10.0.0.2", 00:08:07.260 "trsvcid": "4420" 00:08:07.260 } 00:08:07.260 ], 00:08:07.260 "allow_any_host": true, 00:08:07.260 "hosts": [], 00:08:07.260 "serial_number": "SPDK00000000000002", 00:08:07.260 "model_number": "SPDK bdev Controller", 00:08:07.260 "max_namespaces": 32, 00:08:07.260 "min_cntlid": 1, 00:08:07.260 "max_cntlid": 65519, 00:08:07.260 "namespaces": [ 00:08:07.260 { 00:08:07.260 "nsid": 1, 00:08:07.260 "bdev_name": "Null2", 00:08:07.260 "name": "Null2", 00:08:07.260 "nguid": "6727F315261A4D8BA18BD070FD9E9080", 00:08:07.260 "uuid": "6727f315-261a-4d8b-a18b-d070fd9e9080" 00:08:07.260 } 00:08:07.260 ] 00:08:07.260 }, 00:08:07.260 { 00:08:07.260 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:07.260 "subtype": "NVMe", 00:08:07.260 "listen_addresses": [ 00:08:07.260 { 00:08:07.260 "trtype": "TCP", 00:08:07.260 "adrfam": "IPv4", 00:08:07.260 "traddr": "10.0.0.2", 00:08:07.260 "trsvcid": "4420" 00:08:07.260 } 00:08:07.260 ], 00:08:07.260 "allow_any_host": true, 
00:08:07.260 "hosts": [], 00:08:07.260 "serial_number": "SPDK00000000000003", 00:08:07.260 "model_number": "SPDK bdev Controller", 00:08:07.260 "max_namespaces": 32, 00:08:07.260 "min_cntlid": 1, 00:08:07.260 "max_cntlid": 65519, 00:08:07.260 "namespaces": [ 00:08:07.260 { 00:08:07.260 "nsid": 1, 00:08:07.260 "bdev_name": "Null3", 00:08:07.260 "name": "Null3", 00:08:07.260 "nguid": "EAF0D892A3D147A3A5148EAEB3A73B89", 00:08:07.260 "uuid": "eaf0d892-a3d1-47a3-a514-8eaeb3a73b89" 00:08:07.260 } 00:08:07.260 ] 00:08:07.260 }, 00:08:07.260 { 00:08:07.260 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:07.260 "subtype": "NVMe", 00:08:07.260 "listen_addresses": [ 00:08:07.260 { 00:08:07.260 "trtype": "TCP", 00:08:07.260 "adrfam": "IPv4", 00:08:07.260 "traddr": "10.0.0.2", 00:08:07.260 "trsvcid": "4420" 00:08:07.260 } 00:08:07.260 ], 00:08:07.260 "allow_any_host": true, 00:08:07.260 "hosts": [], 00:08:07.260 "serial_number": "SPDK00000000000004", 00:08:07.260 "model_number": "SPDK bdev Controller", 00:08:07.260 "max_namespaces": 32, 00:08:07.260 "min_cntlid": 1, 00:08:07.260 "max_cntlid": 65519, 00:08:07.260 "namespaces": [ 00:08:07.260 { 00:08:07.260 "nsid": 1, 00:08:07.260 "bdev_name": "Null4", 00:08:07.260 "name": "Null4", 00:08:07.260 "nguid": "ED2CB9E4A15747C5B0B6188FDA76B7AA", 00:08:07.260 "uuid": "ed2cb9e4-a157-47c5-b0b6-188fda76b7aa" 00:08:07.260 } 00:08:07.260 ] 00:08:07.260 } 00:08:07.260 ] 00:08:07.260 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.260 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:07.260 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.260 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.260 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.260 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.260 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.260 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:07.260 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.260 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.260 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.260 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.260 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:07.260 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.260 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:07.261 04:24:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:07.519 rmmod nvme_tcp 00:08:07.519 rmmod nvme_fabrics 00:08:07.519 rmmod nvme_keyring 00:08:07.519 04:24:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:07.519 04:24:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:07.519 04:24:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:07.519 04:24:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2684608 ']' 00:08:07.519 04:24:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2684608 00:08:07.519 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 2684608 ']' 00:08:07.519 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 2684608 00:08:07.519 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:07.519 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:07.519 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2684608 00:08:07.519 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:07.519 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:07.519 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2684608' 00:08:07.519 killing process with pid 2684608 00:08:07.519 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 2684608 00:08:07.519 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 2684608 00:08:07.779 04:24:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:07.779 04:24:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:07.779 04:24:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:07.779 04:24:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:07.779 04:24:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:07.779 04:24:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.779 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.779 04:24:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.687 04:24:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:09.687 00:08:09.687 real 0m5.299s 00:08:09.687 user 0m4.371s 00:08:09.687 sys 0m1.792s 00:08:09.687 04:24:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:09.687 04:24:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.687 ************************************ 00:08:09.687 END TEST nvmf_target_discovery 00:08:09.687 ************************************ 00:08:09.687 04:24:29 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:09.687 04:24:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:09.687 04:24:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.687 04:24:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:09.687 ************************************ 00:08:09.687 START TEST nvmf_referrals 00:08:09.687 ************************************ 00:08:09.687 04:24:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:09.946 * Looking for test storage... 00:08:09.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
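These three loopback addresses, together with the referral port 4430 set just below, are the values every later check in this test compares against. The host-side half of that check reads the referrals back through the discovery controller; in rough form (host NQN and host ID are the per-run values produced by nvme gen-hostnqn above, and the listener on 10.0.0.2:8009 is created a little further down):

    # discover against the target's discovery listener, keep only the referral
    # records (everything that is not the current discovery subsystem) and
    # print their transport addresses
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
        | sort

Right after all three referrals are added this is expected to print 127.0.0.2, 127.0.0.3 and 127.0.0.4; once they are removed it prints nothing, which is the empty comparison seen near the end of the test.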
00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:09.946 04:24:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:11.852 04:24:31 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:11.852 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:11.852 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.852 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:11.853 04:24:31 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:11.853 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:11.853 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:11.853 04:24:31 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:11.853 04:24:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:11.853 04:24:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:11.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:11.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:08:11.853 00:08:11.853 --- 10.0.0.2 ping statistics --- 00:08:11.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.853 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:08:11.853 04:24:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:11.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:08:11.853 00:08:11.853 --- 10.0.0.1 ping statistics --- 00:08:11.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.853 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:08:11.853 04:24:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.853 04:24:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:11.853 04:24:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:11.853 04:24:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.853 04:24:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:11.853 04:24:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:11.853 04:24:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.853 04:24:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:11.853 04:24:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:12.113 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:12.113 04:24:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:12.113 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:12.113 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.113 04:24:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2686695 00:08:12.113 04:24:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:12.113 04:24:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2686695 00:08:12.113 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 2686695 ']' 00:08:12.113 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.113 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:12.113 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:12.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.113 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:12.113 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.113 [2024-07-14 04:24:32.105523] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:12.113 [2024-07-14 04:24:32.105626] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.113 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.113 [2024-07-14 04:24:32.175929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.113 [2024-07-14 04:24:32.270381] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.113 [2024-07-14 04:24:32.270441] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.113 [2024-07-14 04:24:32.270469] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.113 [2024-07-14 04:24:32.270484] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.113 [2024-07-14 04:24:32.270497] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.113 [2024-07-14 04:24:32.270589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.113 [2024-07-14 04:24:32.270642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.113 [2024-07-14 04:24:32.270698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.113 [2024-07-14 04:24:32.270700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.373 [2024-07-14 04:24:32.425835] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.373 [2024-07-14 04:24:32.438078] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
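The referral handling exercised in the next stretch of the trace reduces to a handful of SPDK RPCs. A minimal sketch of that sequence, assuming rpc_cmd is the usual wrapper around scripts/rpc.py talking to the /var/tmp/spdk.sock socket shown above (addresses and ports are the ones defined by referrals.sh):

    # create the TCP transport and expose the discovery service on the target IP
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

    # register three discovery referrals on referral port 4430
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done

    # list what the target now reports; the test expects exactly three entries
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'

The same referrals are then removed one by one with nvmf_discovery_remove_referral and the count is re-checked, which is the add/query/remove cycle visible in the trace below.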
00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 
-s 8009 -o json 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:12.373 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:12.632 04:24:32 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.632 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.892 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.892 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:12.892 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.892 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.892 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.892 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:12.892 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:12.892 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.892 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.892 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:12.892 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.892 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:12.892 04:24:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.892 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:12.892 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:12.892 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:12.892 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:12.892 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:12.892 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.892 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:12.892 04:24:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:12.892 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:12.892 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:12.892 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:12.893 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:12.893 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:12.893 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.893 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:13.153 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:13.153 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:13.153 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:13.153 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:13.153 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.153 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.411 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:13.668 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:13.668 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:13.668 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:13.668 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:13.668 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.668 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:13.668 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:13.668 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:13.668 04:24:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.668 04:24:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.668 04:24:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.668 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.668 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:13.668 04:24:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.668 04:24:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.927 04:24:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.927 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:13.927 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:13.927 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.927 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.927 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.927 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.927 04:24:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:13.927 rmmod nvme_tcp 00:08:13.927 rmmod nvme_fabrics 00:08:13.927 rmmod nvme_keyring 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2686695 ']' 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2686695 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 2686695 ']' 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 2686695 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2686695 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2686695' 00:08:13.927 killing process with pid 2686695 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 2686695 00:08:13.927 04:24:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 2686695 00:08:14.186 04:24:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:14.186 04:24:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:14.186 04:24:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:14.186 04:24:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.186 04:24:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:14.186 04:24:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.186 04:24:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.186 04:24:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.725 04:24:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:16.725 00:08:16.725 real 0m6.548s 00:08:16.725 user 0m9.480s 00:08:16.725 sys 0m2.144s 00:08:16.725 04:24:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:08:16.725 04:24:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:16.725 ************************************ 00:08:16.725 END TEST nvmf_referrals 00:08:16.725 ************************************ 00:08:16.725 04:24:36 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:16.725 04:24:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:16.725 04:24:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:16.725 04:24:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.725 ************************************ 00:08:16.725 START TEST nvmf_connect_disconnect 00:08:16.725 ************************************ 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:16.725 * Looking for test storage... 00:08:16.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.725 04:24:36 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:16.725 04:24:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:18.626 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:18.626 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:18.626 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:18.626 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:18.626 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:18.626 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:18.626 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:18.626 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:18.626 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:18.626 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:18.626 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:18.626 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:18.626 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:18.626 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:18.626 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:18.626 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.626 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:18.627 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:18.627 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:18.627 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:18.627 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:18.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:18.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:08:18.627 00:08:18.627 --- 10.0.0.2 ping statistics --- 00:08:18.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.627 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:18.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:18.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:08:18.627 00:08:18.627 --- 10.0.0.1 ping statistics --- 00:08:18.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.627 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2688995 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2688995 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 2688995 ']' 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:18.627 04:24:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:18.627 [2024-07-14 04:24:38.718299] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
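The nvmf_tcp_init sequence traced above is what turns the two ice ports on this rig (cvl_0_0 / cvl_0_1) into a self-contained NVMe/TCP test bed: the target-side port is moved into its own network namespace, both ends get addresses on 10.0.0.0/24, TCP port 4420 is opened in iptables, reachability is checked with ping in both directions, and nvmf_tgt is then launched inside that namespace. A condensed sketch of those steps, reconstructed from the trace (the cvl_0_0 / cvl_0_1 / cvl_0_0_ns_spdk names are specific to this host's E810 NICs):

    # move the target-side port into its own namespace so initiator and
    # target traffic really traverse the two physical ports
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator side keeps 10.0.0.1, target side gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open the NVMe/TCP port and sanity-check connectivity both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # start the target application inside the namespace
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The target is then provisioned over RPC exactly as the trace shows further down (nvmf_create_transport -t tcp -o -u 8192 -c 0, bdev_malloc_create 64 512, nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener on 10.0.0.2 port 4420), after which connect_disconnect.sh runs 100 iterations; every "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line below is nvme-cli output from one iteration. Roughly (a sketch, not the verbatim loop body of connect_disconnect.sh):

    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "... disconnected 1 controller(s)"
    done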
00:08:18.627 [2024-07-14 04:24:38.718385] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.627 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.627 [2024-07-14 04:24:38.788404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.886 [2024-07-14 04:24:38.883288] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.886 [2024-07-14 04:24:38.883351] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.886 [2024-07-14 04:24:38.883367] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.886 [2024-07-14 04:24:38.883380] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.886 [2024-07-14 04:24:38.883392] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:18.886 [2024-07-14 04:24:38.883482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.886 [2024-07-14 04:24:38.883541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.886 [2024-07-14 04:24:38.883594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.886 [2024-07-14 04:24:38.883597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.886 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:18.887 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:18.887 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:18.887 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:18.887 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:18.887 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.887 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:18.887 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.887 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:18.887 [2024-07-14 04:24:39.047910] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.887 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.887 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:18.887 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.887 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.145 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.145 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:19.145 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:19.145 04:24:39 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.145 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.145 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.145 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:19.145 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.145 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.145 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.145 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:19.145 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.145 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.145 [2024-07-14 04:24:39.105749] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.145 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.145 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:19.145 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:19.145 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:19.145 04:24:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:21.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.220 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:08.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.353 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:11.353 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:11.353 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:11.354 rmmod nvme_tcp 00:12:11.354 rmmod nvme_fabrics 00:12:11.354 rmmod nvme_keyring 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2688995 ']' 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2688995 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 
2688995 ']' 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 2688995 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2688995 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2688995' 00:12:11.354 killing process with pid 2688995 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 2688995 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 2688995 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.354 04:28:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.892 04:28:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:13.892 00:12:13.892 real 3m57.075s 00:12:13.892 user 15m3.291s 00:12:13.892 sys 0m34.287s 00:12:13.892 04:28:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:13.892 04:28:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:13.893 ************************************ 00:12:13.893 END TEST nvmf_connect_disconnect 00:12:13.893 ************************************ 00:12:13.893 04:28:33 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:13.893 04:28:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:13.893 04:28:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:13.893 04:28:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:13.893 ************************************ 00:12:13.893 START TEST nvmf_multitarget 00:12:13.893 ************************************ 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:13.893 * Looking for test storage... 
00:12:13.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
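At this point the log has moved on to the multitarget test, which has just entered nvmftestinit. Every target test in this run follows the same harness pattern the trace keeps repeating: source test/nvmf/common.sh, bring the TCP test bed up with nvmftestinit, start the target with nvmfappstart, run the test-specific RPC work, and finish with nvmftestfini (which unloads nvme-tcp/nvme-fabrics, kills the nvmf_tgt pid and removes the namespace). A bare-bones sketch of that skeleton, with the test body elided and option parsing omitted:

    #!/usr/bin/env bash
    source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh

    nvmftestinit                  # PCI/NIC discovery + netns, addresses, iptables, pings
    nvmfappstart -m 0xF           # nvmf_tgt inside cvl_0_0_ns_spdk, waitforlisten on spdk.sock

    # ... test-specific rpc_cmd / *_rpc.py calls ...

    trap - SIGINT SIGTERM EXIT
    nvmftestfini                  # module unload, killprocess $nvmfpid, remove_spdk_ns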
00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:13.893 04:28:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:15.801 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.801 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:15.802 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:15.802 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:15.802 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:15.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:15.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:12:15.802 00:12:15.802 --- 10.0.0.2 ping statistics --- 00:12:15.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.802 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:12:15.802 00:12:15.802 --- 10.0.0.1 ping statistics --- 00:12:15.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.802 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2720097 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2720097 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 2720097 ']' 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:15.802 04:28:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:15.802 [2024-07-14 04:28:35.835021] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
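With the test bed rebuilt, the multitarget checks that follow drive test/nvmf/target/multitarget_rpc.py against the running target: they confirm a single default target exists, create two extra targets, confirm the count is three, then delete both and confirm the count is back to one. Condensed from the trace below (the nvmf_tgt_1 / nvmf_tgt_2 names and the -s 32 argument are simply the values this run uses):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]      # only the default target

    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]      # default + the two new ones

    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]      # back to just the default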
00:12:15.802 [2024-07-14 04:28:35.835101] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.802 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.802 [2024-07-14 04:28:35.911193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.061 [2024-07-14 04:28:36.005901] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.061 [2024-07-14 04:28:36.005959] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.061 [2024-07-14 04:28:36.005976] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.061 [2024-07-14 04:28:36.005989] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.061 [2024-07-14 04:28:36.006001] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:16.061 [2024-07-14 04:28:36.006057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.061 [2024-07-14 04:28:36.006087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.061 [2024-07-14 04:28:36.006139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.061 [2024-07-14 04:28:36.006143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.628 04:28:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:16.628 04:28:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:12:16.628 04:28:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:16.628 04:28:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:16.628 04:28:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:16.628 04:28:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.628 04:28:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:16.628 04:28:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:16.628 04:28:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:16.887 04:28:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:16.887 04:28:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:16.887 "nvmf_tgt_1" 00:12:16.887 04:28:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:17.145 "nvmf_tgt_2" 00:12:17.145 04:28:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:17.145 04:28:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:17.145 04:28:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:17.145 
04:28:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:17.404 true 00:12:17.404 04:28:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:17.404 true 00:12:17.404 04:28:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:17.404 04:28:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:17.404 04:28:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:17.404 04:28:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:17.404 04:28:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:17.404 04:28:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:17.404 04:28:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:17.404 04:28:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:17.404 04:28:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:17.404 04:28:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:17.404 04:28:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:17.404 rmmod nvme_tcp 00:12:17.662 rmmod nvme_fabrics 00:12:17.662 rmmod nvme_keyring 00:12:17.663 04:28:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:17.663 04:28:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:17.663 04:28:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:17.663 04:28:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2720097 ']' 00:12:17.663 04:28:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2720097 00:12:17.663 04:28:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 2720097 ']' 00:12:17.663 04:28:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 2720097 00:12:17.663 04:28:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:12:17.663 04:28:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:17.663 04:28:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2720097 00:12:17.663 04:28:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:17.663 04:28:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:17.663 04:28:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2720097' 00:12:17.663 killing process with pid 2720097 00:12:17.663 04:28:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 2720097 00:12:17.663 04:28:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 2720097 00:12:17.923 04:28:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:17.923 04:28:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:17.923 04:28:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:17.923 04:28:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:17.923 04:28:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:17.923 04:28:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.923 04:28:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:17.923 04:28:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.835 04:28:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:19.835 00:12:19.835 real 0m6.356s 00:12:19.835 user 0m9.096s 00:12:19.835 sys 0m1.960s 00:12:19.835 04:28:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:19.835 04:28:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:19.835 ************************************ 00:12:19.835 END TEST nvmf_multitarget 00:12:19.835 ************************************ 00:12:19.835 04:28:39 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:19.835 04:28:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:19.835 04:28:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:19.835 04:28:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:19.835 ************************************ 00:12:19.835 START TEST nvmf_rpc 00:12:19.835 ************************************ 00:12:19.838 04:28:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:19.839 * Looking for test storage... 00:12:19.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.839 04:28:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.839 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:19.839 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.839 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.839 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.839 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.839 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.839 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.839 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.839 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.839 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.839 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.100 04:28:40 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.100 
04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:20.100 04:28:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:22.004 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:22.004 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:22.004 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.004 
04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:22.004 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:22.005 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.005 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:22.005 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:22.005 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.005 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:22.005 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:22.005 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:22.005 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:22.005 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:22.005 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.005 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.005 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:22.005 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:22.005 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:22.005 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:22.005 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:22.005 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:22.005 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.005 04:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:22.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:22.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:12:22.005 00:12:22.005 --- 10.0.0.2 ping statistics --- 00:12:22.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.005 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:22.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:22.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:12:22.005 00:12:22.005 --- 10.0.0.1 ping statistics --- 00:12:22.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.005 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2722322 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2722322 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 2722322 ']' 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:22.005 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.265 [2024-07-14 04:28:42.196670] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:12:22.265 [2024-07-14 04:28:42.196766] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.265 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.265 [2024-07-14 04:28:42.265296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.265 [2024-07-14 04:28:42.363306] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.265 [2024-07-14 04:28:42.363376] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.265 [2024-07-14 04:28:42.363393] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.265 [2024-07-14 04:28:42.363406] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.265 [2024-07-14 04:28:42.363418] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.265 [2024-07-14 04:28:42.363485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.265 [2024-07-14 04:28:42.363539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.265 [2024-07-14 04:28:42.363594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.265 [2024-07-14 04:28:42.363597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:22.523 "tick_rate": 2700000000, 00:12:22.523 "poll_groups": [ 00:12:22.523 { 00:12:22.523 "name": "nvmf_tgt_poll_group_000", 00:12:22.523 "admin_qpairs": 0, 00:12:22.523 "io_qpairs": 0, 00:12:22.523 "current_admin_qpairs": 0, 00:12:22.523 "current_io_qpairs": 0, 00:12:22.523 "pending_bdev_io": 0, 00:12:22.523 "completed_nvme_io": 0, 00:12:22.523 "transports": [] 00:12:22.523 }, 00:12:22.523 { 00:12:22.523 "name": "nvmf_tgt_poll_group_001", 00:12:22.523 "admin_qpairs": 0, 00:12:22.523 "io_qpairs": 0, 00:12:22.523 "current_admin_qpairs": 0, 00:12:22.523 "current_io_qpairs": 0, 00:12:22.523 "pending_bdev_io": 0, 00:12:22.523 "completed_nvme_io": 0, 00:12:22.523 "transports": [] 00:12:22.523 }, 00:12:22.523 { 00:12:22.523 "name": "nvmf_tgt_poll_group_002", 00:12:22.523 "admin_qpairs": 0, 00:12:22.523 "io_qpairs": 0, 00:12:22.523 "current_admin_qpairs": 0, 00:12:22.523 "current_io_qpairs": 0, 00:12:22.523 "pending_bdev_io": 0, 00:12:22.523 "completed_nvme_io": 0, 00:12:22.523 "transports": [] 
00:12:22.523 }, 00:12:22.523 { 00:12:22.523 "name": "nvmf_tgt_poll_group_003", 00:12:22.523 "admin_qpairs": 0, 00:12:22.523 "io_qpairs": 0, 00:12:22.523 "current_admin_qpairs": 0, 00:12:22.523 "current_io_qpairs": 0, 00:12:22.523 "pending_bdev_io": 0, 00:12:22.523 "completed_nvme_io": 0, 00:12:22.523 "transports": [] 00:12:22.523 } 00:12:22.523 ] 00:12:22.523 }' 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.523 [2024-07-14 04:28:42.617079] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.523 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:22.523 "tick_rate": 2700000000, 00:12:22.523 "poll_groups": [ 00:12:22.523 { 00:12:22.523 "name": "nvmf_tgt_poll_group_000", 00:12:22.523 "admin_qpairs": 0, 00:12:22.523 "io_qpairs": 0, 00:12:22.523 "current_admin_qpairs": 0, 00:12:22.523 "current_io_qpairs": 0, 00:12:22.523 "pending_bdev_io": 0, 00:12:22.523 "completed_nvme_io": 0, 00:12:22.523 "transports": [ 00:12:22.523 { 00:12:22.523 "trtype": "TCP" 00:12:22.523 } 00:12:22.523 ] 00:12:22.523 }, 00:12:22.523 { 00:12:22.523 "name": "nvmf_tgt_poll_group_001", 00:12:22.523 "admin_qpairs": 0, 00:12:22.523 "io_qpairs": 0, 00:12:22.523 "current_admin_qpairs": 0, 00:12:22.523 "current_io_qpairs": 0, 00:12:22.523 "pending_bdev_io": 0, 00:12:22.523 "completed_nvme_io": 0, 00:12:22.523 "transports": [ 00:12:22.524 { 00:12:22.524 "trtype": "TCP" 00:12:22.524 } 00:12:22.524 ] 00:12:22.524 }, 00:12:22.524 { 00:12:22.524 "name": "nvmf_tgt_poll_group_002", 00:12:22.524 "admin_qpairs": 0, 00:12:22.524 "io_qpairs": 0, 00:12:22.524 "current_admin_qpairs": 0, 00:12:22.524 "current_io_qpairs": 0, 00:12:22.524 "pending_bdev_io": 0, 00:12:22.524 "completed_nvme_io": 0, 00:12:22.524 "transports": [ 00:12:22.524 { 00:12:22.524 "trtype": "TCP" 00:12:22.524 } 00:12:22.524 ] 00:12:22.524 }, 00:12:22.524 { 00:12:22.524 "name": "nvmf_tgt_poll_group_003", 00:12:22.524 "admin_qpairs": 0, 00:12:22.524 "io_qpairs": 0, 00:12:22.524 "current_admin_qpairs": 0, 00:12:22.524 "current_io_qpairs": 0, 00:12:22.524 "pending_bdev_io": 0, 00:12:22.524 "completed_nvme_io": 0, 00:12:22.524 "transports": [ 00:12:22.524 { 00:12:22.524 "trtype": "TCP" 00:12:22.524 } 00:12:22.524 ] 00:12:22.524 } 00:12:22.524 ] 
00:12:22.524 }' 00:12:22.524 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:22.524 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:22.524 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:22.524 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:22.524 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:22.524 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:22.524 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:22.524 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:22.524 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.784 Malloc1 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.784 [2024-07-14 04:28:42.782951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:22.784 [2024-07-14 04:28:42.801449] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:22.784 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:22.784 could not add new controller: failed to write to nvme-fabrics device 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:22.784 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:22.785 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:22.785 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.785 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.785 04:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.785 04:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.351 04:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:23.351 04:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:23.351 04:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.351 04:28:43 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:23.351 04:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:25.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x 
/usr/sbin/nvme ]] 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.885 [2024-07-14 04:28:45.652518] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:25.885 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:25.885 could not add new controller: failed to write to nvme-fabrics device 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.885 04:28:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:26.477 04:28:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:26.477 04:28:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:26.477 04:28:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:26.477 04:28:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:26.477 04:28:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:28.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 
-- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.385 [2024-07-14 04:28:48.503028] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.385 04:28:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.952 04:28:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:28.952 04:28:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:28.952 04:28:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.952 04:28:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:28.952 04:28:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.488 [2024-07-14 04:28:51.290669] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:31.488 
04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.488 04:28:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.056 04:28:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:32.056 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:32.056 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.056 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:32.056 04:28:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:33.965 04:28:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:33.965 04:28:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:33.965 04:28:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.965 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:33.965 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.965 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:33.965 04:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.965 04:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.965 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:33.965 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:33.965 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.965 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:33.966 04:28:54 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.966 [2024-07-14 04:28:54.126294] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.966 04:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.915 04:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.915 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:34.915 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.915 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:34.915 04:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:36.816 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:36.816 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:36.816 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.816 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:36.816 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.816 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:36.816 04:28:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.816 04:28:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:36.816 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:36.816 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:36.816 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:12:36.816 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:36.816 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.816 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:36.816 04:28:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.817 [2024-07-14 04:28:56.916741] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.817 04:28:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.384 04:28:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:37.384 04:28:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:37.384 04:28:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 
00:12:37.384 04:28:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:37.384 04:28:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.919 [2024-07-14 04:28:59.705222] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.919 04:28:59 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.919 04:28:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.486 04:29:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.486 04:29:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:40.486 04:29:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.486 04:29:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:40.486 04:29:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:42.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.395 [2024-07-14 04:29:02.579503] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.395 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.654 [2024-07-14 04:29:02.627598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.654 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 [2024-07-14 04:29:02.675750] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 [2024-07-14 04:29:02.723943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 [2024-07-14 04:29:02.772096] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.655 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:42.655 "tick_rate": 2700000000, 00:12:42.655 "poll_groups": [ 00:12:42.655 { 00:12:42.655 "name": "nvmf_tgt_poll_group_000", 00:12:42.655 "admin_qpairs": 2, 00:12:42.655 
"io_qpairs": 84, 00:12:42.655 "current_admin_qpairs": 0, 00:12:42.655 "current_io_qpairs": 0, 00:12:42.655 "pending_bdev_io": 0, 00:12:42.655 "completed_nvme_io": 184, 00:12:42.655 "transports": [ 00:12:42.655 { 00:12:42.655 "trtype": "TCP" 00:12:42.655 } 00:12:42.655 ] 00:12:42.655 }, 00:12:42.655 { 00:12:42.655 "name": "nvmf_tgt_poll_group_001", 00:12:42.655 "admin_qpairs": 2, 00:12:42.655 "io_qpairs": 84, 00:12:42.656 "current_admin_qpairs": 0, 00:12:42.656 "current_io_qpairs": 0, 00:12:42.656 "pending_bdev_io": 0, 00:12:42.656 "completed_nvme_io": 184, 00:12:42.656 "transports": [ 00:12:42.656 { 00:12:42.656 "trtype": "TCP" 00:12:42.656 } 00:12:42.656 ] 00:12:42.656 }, 00:12:42.656 { 00:12:42.656 "name": "nvmf_tgt_poll_group_002", 00:12:42.656 "admin_qpairs": 1, 00:12:42.656 "io_qpairs": 84, 00:12:42.656 "current_admin_qpairs": 0, 00:12:42.656 "current_io_qpairs": 0, 00:12:42.656 "pending_bdev_io": 0, 00:12:42.656 "completed_nvme_io": 183, 00:12:42.656 "transports": [ 00:12:42.656 { 00:12:42.656 "trtype": "TCP" 00:12:42.656 } 00:12:42.656 ] 00:12:42.656 }, 00:12:42.656 { 00:12:42.656 "name": "nvmf_tgt_poll_group_003", 00:12:42.656 "admin_qpairs": 2, 00:12:42.656 "io_qpairs": 84, 00:12:42.656 "current_admin_qpairs": 0, 00:12:42.656 "current_io_qpairs": 0, 00:12:42.656 "pending_bdev_io": 0, 00:12:42.656 "completed_nvme_io": 135, 00:12:42.656 "transports": [ 00:12:42.656 { 00:12:42.656 "trtype": "TCP" 00:12:42.656 } 00:12:42.656 ] 00:12:42.656 } 00:12:42.656 ] 00:12:42.656 }' 00:12:42.656 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:42.656 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:42.656 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:42.656 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:42.914 rmmod nvme_tcp 00:12:42.914 rmmod nvme_fabrics 00:12:42.914 rmmod nvme_keyring 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:42.914 04:29:02 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2722322 ']' 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2722322 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 2722322 ']' 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 2722322 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2722322 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:42.914 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:42.915 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2722322' 00:12:42.915 killing process with pid 2722322 00:12:42.915 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 2722322 00:12:42.915 04:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 2722322 00:12:43.173 04:29:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:43.173 04:29:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:43.173 04:29:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:43.173 04:29:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:43.173 04:29:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:43.173 04:29:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.173 04:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:43.173 04:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.136 04:29:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:45.136 00:12:45.136 real 0m25.309s 00:12:45.136 user 1m22.588s 00:12:45.136 sys 0m4.098s 00:12:45.136 04:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:45.136 04:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.136 ************************************ 00:12:45.136 END TEST nvmf_rpc 00:12:45.136 ************************************ 00:12:45.136 04:29:05 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:45.136 04:29:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:45.136 04:29:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:45.136 04:29:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:45.136 ************************************ 00:12:45.136 START TEST nvmf_invalid 00:12:45.136 ************************************ 00:12:45.136 04:29:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:45.395 * Looking for test storage... 
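(A minimal sketch of the qpair accounting check the nvmf_rpc test finishes with, assuming the same rpc.py from the SPDK checkout: nvmf_get_stats output is piped through jq to pull one counter per poll group and awk sums them; the test only asserts the totals are non-zero.)

    # sum admin and I/O queue pairs across all nvmf poll groups
    admin_total=$(rpc.py nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}')
    io_total=$(rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}')
    (( admin_total > 0 )) && (( io_total > 0 ))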
00:12:45.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:45.395 04:29:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:47.302 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:47.302 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:47.302 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:47.302 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:47.303 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:47.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:47.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:12:47.303 00:12:47.303 --- 10.0.0.2 ping statistics --- 00:12:47.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.303 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:47.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:47.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:12:47.303 00:12:47.303 --- 10.0.0.1 ping statistics --- 00:12:47.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.303 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2726812 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2726812 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 2726812 ']' 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:47.303 04:29:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:47.563 [2024-07-14 04:29:07.537817] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:12:47.563 [2024-07-14 04:29:07.537921] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.563 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.563 [2024-07-14 04:29:07.607398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:47.563 [2024-07-14 04:29:07.702546] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.563 [2024-07-14 04:29:07.702607] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.563 [2024-07-14 04:29:07.702632] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.563 [2024-07-14 04:29:07.702646] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.563 [2024-07-14 04:29:07.702657] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:47.563 [2024-07-14 04:29:07.702730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.563 [2024-07-14 04:29:07.702783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.563 [2024-07-14 04:29:07.702833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.563 [2024-07-14 04:29:07.702836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.821 04:29:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:47.821 04:29:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:12:47.821 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:47.821 04:29:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:47.821 04:29:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:47.821 04:29:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.821 04:29:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:47.821 04:29:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12281 00:12:48.078 [2024-07-14 04:29:08.140678] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:48.078 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:48.078 { 00:12:48.078 "nqn": "nqn.2016-06.io.spdk:cnode12281", 00:12:48.078 "tgt_name": "foobar", 00:12:48.078 "method": "nvmf_create_subsystem", 00:12:48.078 "req_id": 1 00:12:48.078 } 00:12:48.078 Got JSON-RPC error response 00:12:48.078 response: 00:12:48.078 { 00:12:48.078 "code": -32603, 00:12:48.078 "message": "Unable to find target foobar" 00:12:48.078 }' 00:12:48.078 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:48.078 { 00:12:48.078 "nqn": "nqn.2016-06.io.spdk:cnode12281", 00:12:48.078 "tgt_name": "foobar", 00:12:48.078 "method": "nvmf_create_subsystem", 00:12:48.078 "req_id": 1 00:12:48.078 } 00:12:48.078 Got JSON-RPC error response 00:12:48.078 response: 00:12:48.078 { 00:12:48.078 "code": -32603, 00:12:48.078 "message": "Unable to find target foobar" 00:12:48.078 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:48.078 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:48.078 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6484 00:12:48.336 [2024-07-14 04:29:08.433657] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6484: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:48.336 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:48.336 { 00:12:48.336 "nqn": "nqn.2016-06.io.spdk:cnode6484", 00:12:48.336 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:48.336 "method": "nvmf_create_subsystem", 00:12:48.336 "req_id": 1 00:12:48.336 } 00:12:48.336 Got JSON-RPC error response 00:12:48.336 response: 00:12:48.336 { 00:12:48.336 "code": -32602, 00:12:48.336 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:48.336 }' 00:12:48.336 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:48.336 { 00:12:48.336 "nqn": "nqn.2016-06.io.spdk:cnode6484", 00:12:48.336 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:48.336 "method": "nvmf_create_subsystem", 00:12:48.336 "req_id": 1 00:12:48.336 } 00:12:48.336 Got JSON-RPC error response 00:12:48.336 response: 00:12:48.336 { 00:12:48.336 "code": -32602, 00:12:48.336 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:48.336 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:48.336 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:48.336 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode26276 00:12:48.594 [2024-07-14 04:29:08.678508] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26276: invalid model number 'SPDK_Controller' 00:12:48.594 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:48.594 { 00:12:48.594 "nqn": "nqn.2016-06.io.spdk:cnode26276", 00:12:48.594 "model_number": "SPDK_Controller\u001f", 00:12:48.594 "method": "nvmf_create_subsystem", 00:12:48.594 "req_id": 1 00:12:48.594 } 00:12:48.594 Got JSON-RPC error response 00:12:48.594 response: 00:12:48.594 { 00:12:48.594 "code": -32602, 00:12:48.594 "message": "Invalid MN SPDK_Controller\u001f" 00:12:48.594 }' 00:12:48.594 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:48.594 { 00:12:48.594 "nqn": "nqn.2016-06.io.spdk:cnode26276", 00:12:48.594 "model_number": "SPDK_Controller\u001f", 00:12:48.594 "method": "nvmf_create_subsystem", 00:12:48.594 "req_id": 1 00:12:48.594 } 00:12:48.594 Got JSON-RPC error response 00:12:48.594 response: 00:12:48.594 { 00:12:48.594 "code": -32602, 00:12:48.594 "message": "Invalid MN SPDK_Controller\u001f" 00:12:48.594 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:48.594 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:48.594 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:48.594 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' 
'91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:48.594 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:48.594 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:48.594 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:48.594 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.594 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:48.594 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:48.594 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:48.594 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.594 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.594 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:48.594 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 52 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ == \- ]] 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ' I>CQyNxtKp[-ao41VtW' 00:12:48.595 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ' I>CQyNxtKp[-ao41VtW' nqn.2016-06.io.spdk:cnode9938 00:12:48.854 [2024-07-14 04:29:08.979519] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9938: invalid serial number ' I>CQyNxtKp[-ao41VtW' 00:12:48.854 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:48.854 { 00:12:48.854 "nqn": "nqn.2016-06.io.spdk:cnode9938", 00:12:48.854 "serial_number": " I>CQyNxtKp[-ao41V\u007ftW", 00:12:48.854 "method": "nvmf_create_subsystem", 00:12:48.854 "req_id": 1 00:12:48.854 } 00:12:48.854 Got JSON-RPC error response 00:12:48.854 response: 00:12:48.854 { 00:12:48.854 "code": -32602, 
00:12:48.854 "message": "Invalid SN I>CQyNxtKp[-ao41V\u007ftW" 00:12:48.854 }' 00:12:48.854 04:29:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:48.854 { 00:12:48.854 "nqn": "nqn.2016-06.io.spdk:cnode9938", 00:12:48.854 "serial_number": " I>CQyNxtKp[-ao41V\u007ftW", 00:12:48.854 "method": "nvmf_create_subsystem", 00:12:48.854 "req_id": 1 00:12:48.854 } 00:12:48.854 Got JSON-RPC error response 00:12:48.854 response: 00:12:48.854 { 00:12:48.854 "code": -32602, 00:12:48.854 "message": "Invalid SN I>CQyNxtKp[-ao41V\u007ftW" 00:12:48.854 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:48.854 04:29:09 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.854 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:49.112 04:29:09 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:49.112 04:29:09 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:49.112 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.113 
04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.113 
04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ * == \- ]] 00:12:49.113 04:29:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '*4Cco[1c*RGHlR% /dev/null' 00:12:51.698 04:29:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.230 04:29:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:54.230 00:12:54.230 real 0m8.528s 00:12:54.230 user 0m19.943s 00:12:54.230 sys 0m2.343s 00:12:54.230 04:29:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:54.230 04:29:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:54.230 ************************************ 00:12:54.230 END TEST nvmf_invalid 00:12:54.230 ************************************ 00:12:54.230 04:29:13 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:54.230 04:29:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:54.230 04:29:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:54.230 04:29:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:54.230 ************************************ 00:12:54.230 START TEST nvmf_abort 00:12:54.230 ************************************ 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:54.230 * Looking for test storage... 
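(Annotation for readers of the raw trace: the long printf/echo loop above is invalid.sh's gen_random_s helper assembling a random string one byte at a time under xtrace — first a 21-character serial number, then a 41-character model string — and the JSON-RPC error printed in between is the expected outcome. Condensed, the serial-number check amounts to the sketch below; helper and subsystem names are taken from the trace itself, and the authoritative script is spdk/test/nvmf/target/invalid.sh, so treat this as a paraphrase rather than the literal script body.)

    # sketch of the negative test traced above (not the literal script body)
    serial=$(gen_random_s 21)                  # random bytes drawn from 0x20-0x7f
    out=$(rpc.py nvmf_create_subsystem -s "$serial" nqn.2016-06.io.spdk:cnode9938 2>&1) || true
    [[ $out == *"Invalid SN"* ]]               # target must reject it with JSON-RPC error -32602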
00:12:54.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
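(The very long PATH values above are an artifact of paths/export.sh being re-sourced by every test stage: each pass prepends the golangci, Go and protoc tool directories again without deduplicating, so the string grows over the course of the run. The pattern below is reconstructed from the logged values rather than from the script itself, so take it as a rough shape only:)

    # /etc/opt/spdk-pkgdep/paths/export.sh - shape inferred from the trace, not copied from the file
    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH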
00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:12:54.230 04:29:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:56.138 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.138 04:29:15 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:56.138 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:56.138 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:56.138 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:56.138 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:56.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:56.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:12:56.138 00:12:56.138 --- 10.0.0.2 ping statistics --- 00:12:56.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.139 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:12:56.139 04:29:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:56.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:56.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:12:56.139 00:12:56.139 --- 10.0.0.1 ping statistics --- 00:12:56.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.139 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2729439 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2729439 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 2729439 ']' 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:56.139 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:56.139 [2024-07-14 04:29:16.081582] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:56.139 [2024-07-14 04:29:16.081675] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.139 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.139 [2024-07-14 04:29:16.161534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:56.139 [2024-07-14 04:29:16.259697] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.139 [2024-07-14 04:29:16.259766] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
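(At this point nvmftestinit has finished wiring up the physical ports — cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, cvl_0_1 left in the root namespace as the initiator at 10.0.0.1 — and nvmfappstart has launched the target inside that namespace. A condensed replay of the commands traced above follows; the helper bodies live in test/nvmf/common.sh and may differ in detail, so this is a sketch, not the exact sequence:)

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    waitforlisten $!                                       # poll /var/tmp/spdk.sock until the RPC server answers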
00:12:56.139 [2024-07-14 04:29:16.259782] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.139 [2024-07-14 04:29:16.259796] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.139 [2024-07-14 04:29:16.259808] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:56.139 [2024-07-14 04:29:16.261892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.139 [2024-07-14 04:29:16.261951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.139 [2024-07-14 04:29:16.261955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.399 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:56.399 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:12:56.399 04:29:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:56.399 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:56.399 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:56.399 04:29:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.399 04:29:16 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:56.399 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:56.400 [2024-07-14 04:29:16.411434] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:56.400 Malloc0 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:56.400 Delay0 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.400 04:29:16 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:56.400 [2024-07-14 04:29:16.489074] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.400 04:29:16 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:56.400 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.658 [2024-07-14 04:29:16.636055] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:58.565 Initializing NVMe Controllers 00:12:58.565 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:58.565 controller IO queue size 128 less than required 00:12:58.565 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:58.565 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:58.565 Initialization complete. Launching workers. 
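(Those were the last setup steps before the stress run: the abort example invoked in the trace — build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 — connects as an initiator and issues abort commands against its own in-flight I/O. The target-side configuration it exercises was issued through rpc_cmd just above; in plain rpc.py syntax it is, approximately:)

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0
    # Delay0 wraps Malloc0 with artificial latency so aborts find I/O still in flight
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420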
00:12:58.565 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 32009 00:12:58.565 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32073, failed to submit 62 00:12:58.565 success 32013, unsuccess 60, failed 0 00:12:58.565 04:29:18 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:58.565 04:29:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.565 04:29:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:58.565 04:29:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.565 04:29:18 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:58.565 04:29:18 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:12:58.565 04:29:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:58.565 04:29:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:12:58.565 04:29:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:58.565 04:29:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:12:58.565 04:29:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:58.565 04:29:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:58.565 rmmod nvme_tcp 00:12:58.826 rmmod nvme_fabrics 00:12:58.826 rmmod nvme_keyring 00:12:58.826 04:29:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:58.826 04:29:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:12:58.826 04:29:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:12:58.826 04:29:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2729439 ']' 00:12:58.826 04:29:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2729439 00:12:58.826 04:29:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 2729439 ']' 00:12:58.826 04:29:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 2729439 00:12:58.826 04:29:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:12:58.826 04:29:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:58.826 04:29:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2729439 00:12:58.826 04:29:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:58.826 04:29:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:58.826 04:29:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2729439' 00:12:58.826 killing process with pid 2729439 00:12:58.826 04:29:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 2729439 00:12:58.826 04:29:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 2729439 00:12:59.085 04:29:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:59.085 04:29:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:59.085 04:29:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:59.085 04:29:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:59.085 04:29:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:59.085 04:29:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.085 04:29:19 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.085 04:29:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.033 04:29:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:01.033 00:13:01.033 real 0m7.246s 00:13:01.033 user 0m10.599s 00:13:01.033 sys 0m2.580s 00:13:01.033 04:29:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:01.033 04:29:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:01.033 ************************************ 00:13:01.033 END TEST nvmf_abort 00:13:01.033 ************************************ 00:13:01.033 04:29:21 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:01.033 04:29:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:01.033 04:29:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:01.033 04:29:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:01.033 ************************************ 00:13:01.033 START TEST nvmf_ns_hotplug_stress 00:13:01.033 ************************************ 00:13:01.033 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:01.292 * Looking for test storage... 00:13:01.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:01.292 04:29:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:01.292 04:29:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:01.292 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:01.293 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:01.293 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:01.293 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:01.293 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:01.293 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:01.293 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:01.293 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:01.293 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.293 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:01.293 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.293 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:01.293 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:01.293 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:01.293 04:29:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:03.198 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:03.198 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:03.198 04:29:23 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:03.198 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:03.198 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
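At this point common.sh has matched the two ice ports (0000:0a:00.0 and 0000:0a:00.1, device 0x159b) to the net devices cvl_0_0 and cvl_0_1 and has picked cvl_0_0 as the target interface and cvl_0_1 as the initiator interface. The entries that follow build the back-to-back NVMe/TCP topology by moving the target port into its own network namespace; in outline, this is the sequence the log records (a sketch with the address-flush steps omitted):

    ip netns add cvl_0_0_ns_spdk                                        # private network stack for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic through
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

Both pings completing with 0% packet loss confirms the two ports can reach each other before the target application is started.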
00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.198 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:03.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:13:03.457 00:13:03.457 --- 10.0.0.2 ping statistics --- 00:13:03.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.457 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:03.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:13:03.457 00:13:03.457 --- 10.0.0.1 ping statistics --- 00:13:03.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.457 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2731658 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2731658 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 2731658 ']' 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:03.457 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.457 [2024-07-14 04:29:23.477897] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:03.457 [2024-07-14 04:29:23.477991] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.457 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.457 [2024-07-14 04:29:23.558190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:03.715 [2024-07-14 04:29:23.656622] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
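The target application is then launched inside that namespace (nvmfappstart -m 0xE, logged above as ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE, pid 2731658), and waitforlisten blocks until its RPC socket /var/tmp/spdk.sock is up. Once the reactors are running, the entries that follow configure the subsystem and drive the first hotplug loop of ns_hotplug_stress.sh. Reconstructed from the rpc.py calls in the xtrace (rpc.py stands for the full scripts/rpc.py path shown in the log; the loop body is a sketch of what the trace shows, not the script verbatim):

    # one-time target configuration
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # run I/O against the subsystem for 30 s while namespaces are churned underneath it
    spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    null_size=1000
    while kill -0 $PERF_PID; do                       # loop until the perf process exits
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 $null_size      # 1001, 1002, ... as seen below
    done

The "Read completed with error (sct=0, sc=11)" messages interleaved with the loop come from the perf initiator and are consistent with reads landing on the namespace while it is momentarily detached, which is exactly the condition this stress test exercises.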
00:13:03.715 [2024-07-14 04:29:23.656689] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.715 [2024-07-14 04:29:23.656706] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.715 [2024-07-14 04:29:23.656719] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.715 [2024-07-14 04:29:23.656730] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.715 [2024-07-14 04:29:23.656804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.715 [2024-07-14 04:29:23.656876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.715 [2024-07-14 04:29:23.656878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.715 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:03.715 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:13:03.715 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:03.715 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:03.715 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.715 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.715 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:03.715 04:29:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:03.973 [2024-07-14 04:29:24.016820] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.973 04:29:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:04.230 04:29:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.488 [2024-07-14 04:29:24.599802] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.488 04:29:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:04.746 04:29:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:05.004 Malloc0 00:13:05.004 04:29:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:05.262 Delay0 00:13:05.262 04:29:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.520 04:29:25 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:06.086 NULL1 00:13:06.086 04:29:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:06.086 04:29:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2732079 00:13:06.086 04:29:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:06.086 04:29:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:06.086 04:29:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.086 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.343 04:29:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.602 04:29:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:06.602 04:29:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:06.860 true 00:13:06.860 04:29:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:06.860 04:29:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.119 04:29:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.377 04:29:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:07.377 04:29:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:07.634 true 00:13:07.634 04:29:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:07.634 04:29:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.568 Read completed with error (sct=0, sc=11) 00:13:08.568 04:29:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.568 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:08.826 04:29:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:08.826 04:29:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:09.083 true 00:13:09.083 04:29:29 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:09.083 04:29:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.341 04:29:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.599 04:29:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:09.599 04:29:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:09.857 true 00:13:09.857 04:29:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:09.857 04:29:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.123 04:29:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.383 04:29:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:10.383 04:29:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:10.640 true 00:13:10.640 04:29:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:10.640 04:29:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.575 04:29:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.832 04:29:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:11.832 04:29:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:12.090 true 00:13:12.090 04:29:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:12.090 04:29:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.347 04:29:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.604 04:29:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:12.604 04:29:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:12.862 true 00:13:12.862 04:29:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:12.862 04:29:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.798 04:29:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.058 04:29:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:14.058 04:29:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:14.058 true 00:13:14.322 04:29:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:14.322 04:29:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.322 04:29:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.628 04:29:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:14.628 04:29:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:14.886 true 00:13:14.886 04:29:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:14.886 04:29:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.826 04:29:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:16.084 04:29:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:16.084 04:29:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:16.343 true 00:13:16.343 04:29:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:16.343 04:29:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.601 04:29:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.859 04:29:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:16.859 04:29:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:17.117 true 00:13:17.117 04:29:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:17.117 04:29:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.054 04:29:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:18.054 04:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:18.054 04:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:18.320 true 00:13:18.320 04:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:18.320 04:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.577 04:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.833 04:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:18.833 04:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:19.090 true 00:13:19.090 04:29:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:19.091 04:29:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:20.028 04:29:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:20.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:20.286 04:29:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:20.286 04:29:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:20.544 true 00:13:20.544 04:29:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:20.544 04:29:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.803 04:29:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.061 04:29:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:21.061 04:29:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:21.320 true 00:13:21.320 04:29:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:21.320 04:29:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.255 04:29:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:22.514 04:29:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:22.514 04:29:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:22.514 true 00:13:22.773 04:29:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:22.773 04:29:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.773 04:29:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.031 04:29:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:23.031 04:29:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:23.303 true 00:13:23.303 04:29:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:23.303 04:29:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.240 04:29:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.498 04:29:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:24.498 04:29:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:24.756 true 00:13:24.756 04:29:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:24.756 04:29:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.014 04:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.272 04:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:25.272 04:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:25.530 true 00:13:25.530 04:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
2732079 00:13:25.530 04:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.788 04:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.046 04:29:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:26.046 04:29:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:26.305 true 00:13:26.305 04:29:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:26.305 04:29:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.240 04:29:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.240 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.240 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.498 04:29:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:27.498 04:29:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:27.756 true 00:13:27.756 04:29:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:27.756 04:29:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.724 04:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.981 04:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:28.981 04:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:28.981 true 00:13:28.981 04:29:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:28.981 04:29:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.238 04:29:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.495 04:29:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:29.495 04:29:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:29.752 true 00:13:29.752 04:29:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:29.752 04:29:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.688 04:29:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.946 04:29:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:30.946 04:29:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:31.204 true 00:13:31.204 04:29:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:31.204 04:29:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.461 04:29:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.718 04:29:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:31.718 04:29:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:31.976 true 00:13:31.976 04:29:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:31.976 04:29:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.233 04:29:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.490 04:29:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:32.490 04:29:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:32.748 true 00:13:32.748 04:29:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:32.748 04:29:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.705 04:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.962 04:29:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:33.962 04:29:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:34.220 true 00:13:34.479 04:29:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:34.479 04:29:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.046 04:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.304 04:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:35.304 04:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:35.562 true 00:13:35.562 04:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:35.562 04:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.820 04:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.077 04:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:36.077 04:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:36.334 true 00:13:36.334 04:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:36.334 04:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.269 Initializing NVMe Controllers 00:13:37.269 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:37.269 Controller IO queue size 128, less than required. 00:13:37.269 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:37.269 Controller IO queue size 128, less than required. 
00:13:37.269 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:37.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:37.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:37.269 Initialization complete. Launching workers. 00:13:37.269 ======================================================== 00:13:37.269 Latency(us) 00:13:37.269 Device Information : IOPS MiB/s Average min max 00:13:37.269 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 894.67 0.44 74831.04 2611.64 1012434.59 00:13:37.269 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10774.09 5.26 11845.37 2882.60 447331.09 00:13:37.269 ======================================================== 00:13:37.269 Total : 11668.75 5.70 16674.60 2611.64 1012434.59 00:13:37.269 00:13:37.269 04:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.529 04:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:37.529 04:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:37.529 true 00:13:37.788 04:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2732079 00:13:37.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2732079) - No such process 00:13:37.788 04:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2732079 00:13:37.788 04:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.788 04:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:38.046 04:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:38.046 04:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:38.046 04:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:38.046 04:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:38.046 04:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:38.304 null0 00:13:38.304 04:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:38.304 04:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:38.304 04:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:38.563 null1 00:13:38.563 04:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:38.563 04:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:38.563 04:29:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:38.821 null2 00:13:38.821 04:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:38.821 04:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:38.821 04:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:39.079 null3 00:13:39.079 04:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:39.079 04:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:39.079 04:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:39.337 null4 00:13:39.337 04:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:39.337 04:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:39.337 04:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:39.595 null5 00:13:39.595 04:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:39.596 04:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:39.596 04:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:39.854 null6 00:13:39.854 04:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:39.854 04:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:39.854 04:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:40.112 null7 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
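By this point the 30-second perf run has exited (the kill -0 at line 44 of the script reports "No such process" for pid 2732079 above), the script waits for it and removes namespaces 1 and 2, and the second phase starts: eight null bdevs null0..null7 are created and eight add_remove workers are launched in the background, one namespace ID each. Pieced together from the surrounding xtrace entries (the helper body is a sketch, not the script verbatim):

    # each worker repeatedly attaches and detaches its own namespace ID on cnode1
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid
        done
    }

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create null$i 100 4096       # null0 .. null7: size 100 (MB), 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &              # nsid 1..8 against null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"                                 # pids 2736161 2736162 2736164 ... in this run

The interleaved "(( ++i ))", "pids+=($!)" and per-namespace add/remove entries that follow are the eight workers being forked and running concurrently against the same subsystem.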
00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
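The interleaved ns_hotplug_stress.sh@62-@64 lines are the launcher loop at work: for every namespace it starts one add_remove worker in the background and records the worker's PID, and the "wait 2736161 2736162 ..." line a little further down is the script waiting for the whole set to finish. Read back out of the trace (nthreads and pids are the script's own names; the loop body is a reconstruction, not the script verbatim), the launcher is roughly:

  # Launcher inferred from the @62-@64 trace lines: one background add_remove
  # worker per namespace, PIDs collected so the script can wait on all of them.
  nthreads=8
  pids=()
  for (( i = 0; i < nthreads; i++ )); do
      add_remove $(( i + 1 )) "null$i" &   # namespace IDs are 1-based, bdev names 0-based
      pids+=($!)
  done
  wait "${pids[@]}"                        # corresponds to the "wait 2736161 ..." trace line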
00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2736161 2736162 2736164 2736166 2736168 2736170 2736172 2736174 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.112 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:40.370 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.370 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:40.370 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:40.370 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:40.370 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:40.370 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:40.370 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:40.370 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.629 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:13:40.888 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:40.888 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:40.888 04:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:40.888 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.888 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:40.888 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:40.888 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:40.888 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.147 
04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.147 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:41.433 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.433 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:41.434 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:41.434 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:41.434 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.434 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:41.434 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:41.434 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.692 04:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:41.950 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.950 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:13:41.950 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:41.950 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:41.950 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.950 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:41.950 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:41.950 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:42.207 
04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.207 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:42.464 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:42.464 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:42.464 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:42.464 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:42.464 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.464 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:42.464 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:42.464 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:42.722 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.722 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.722 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:42.722 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.722 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.722 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:42.722 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.722 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.722 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:42.722 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.722 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.722 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:42.722 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.722 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.722 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.722 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.722 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:42.722 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:42.722 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.722 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.722 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:42.979 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.979 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.979 04:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:43.236 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:43.236 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:43.236 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:43.236 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:13:43.236 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:43.236 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:43.236 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:43.236 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.493 
04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.493 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:43.752 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:43.752 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:43.752 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:43.752 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:43.752 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.752 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:43.752 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:43.752 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:44.010 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.010 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.010 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:44.010 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.010 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.010 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.010 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.010 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:44.010 04:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:44.010 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.010 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.010 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:44.010 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.010 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.010 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:44.010 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.010 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.010 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:44.010 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.010 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.010 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:44.010 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.010 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.010 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:44.268 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:44.268 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:44.268 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:44.268 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:44.268 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.268 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:13:44.268 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:44.268 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
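Each of those background workers runs the same ten-iteration hot-plug cycle that produces the repeated @16/@17/@18 lines: attach its null bdev to subsystem nqn.2016-06.io.spdk:cnode1 as a namespace, then detach it again. The out-of-order mix of add and remove lines above is simply the eight workers' xtrace output interleaving in the log. As read back from the trace, the worker body is approximately the following sketch (using the same rpc.py path variable as the sketches above):

  # add_remove worker reconstructed from the @14-@18 trace lines: add the
  # namespace, remove it again, ten times per worker.
  add_remove() {
      local nsid=$1 bdev=$2
      for (( i = 0; i < 10; i++ )); do
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }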
00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.525 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:44.784 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:44.784 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:44.784 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:44.784 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:44.784 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:44.784 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:44.784 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:44.784 04:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.042 04:30:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.042 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:45.300 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:45.300 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:45.301 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:45.301 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:45.301 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:45.301 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.301 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:45.301 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:45.558 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.558 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.558 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:45.559 rmmod nvme_tcp 00:13:45.559 rmmod nvme_fabrics 00:13:45.559 rmmod nvme_keyring 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2731658 ']' 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2731658 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 2731658 ']' 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 2731658 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:13:45.559 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:45.559 04:30:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2731658 00:13:45.817 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:45.817 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:45.817 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2731658' 00:13:45.817 killing process with pid 2731658 00:13:45.817 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 2731658 00:13:45.817 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 2731658 00:13:45.817 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:45.817 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:45.817 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:45.817 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:45.817 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:45.817 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.817 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.817 04:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.352 04:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:48.352 00:13:48.352 real 0m46.841s 00:13:48.352 user 3m33.465s 00:13:48.352 sys 0m16.472s 00:13:48.352 04:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:48.352 04:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.352 ************************************ 00:13:48.352 END TEST nvmf_ns_hotplug_stress 00:13:48.352 ************************************ 00:13:48.352 04:30:08 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:48.352 04:30:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:48.352 04:30:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:48.352 04:30:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:48.352 ************************************ 00:13:48.352 START TEST nvmf_connect_stress 00:13:48.352 ************************************ 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:48.352 * Looking for test storage... 
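Stepping back to the nvmftestfini trace just above the END TEST banner: once the workers have drained, the target environment is torn down before the next test begins. The kernel nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded, the nvmf_tgt application (pid 2731658 in this run) is killed and reaped, and the SPDK test network namespace and interface address are cleaned up. Reduced to its visible effects, the teardown amounts to something like the sketch below; it is a simplification of what common.sh actually does, and the netns deletion in particular is an assumed reading of _remove_spdk_ns, with the namespace and interface names taken from the trace.

  # Simplified teardown mirroring the nvmftestfini trace above, not the full common.sh logic.
  modprobe -v -r nvme-tcp                               # produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines
  modprobe -v -r nvme-fabrics
  kill 2731658 && wait 2731658                          # killprocess of the nvmf_tgt app started earlier in the run
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                              # final line of the teardown trace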
00:13:48.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:48.352 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:48.353 04:30:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:48.353 04:30:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:50.256 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:50.256 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:50.256 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:50.257 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:50.257 04:30:10 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:50.257 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:50.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:50.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:13:50.257 00:13:50.257 --- 10.0.0.2 ping statistics --- 00:13:50.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.257 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:50.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:50.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:13:50.257 00:13:50.257 --- 10.0.0.1 ping statistics --- 00:13:50.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.257 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2739509 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2739509 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 2739509 ']' 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:50.257 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.257 [2024-07-14 04:30:10.333711] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:13:50.257 [2024-07-14 04:30:10.333786] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.257 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.257 [2024-07-14 04:30:10.396316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:50.516 [2024-07-14 04:30:10.481033] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.516 [2024-07-14 04:30:10.481085] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.516 [2024-07-14 04:30:10.481114] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.516 [2024-07-14 04:30:10.481126] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.516 [2024-07-14 04:30:10.481137] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.516 [2024-07-14 04:30:10.481274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.516 [2024-07-14 04:30:10.481340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:50.516 [2024-07-14 04:30:10.481343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.516 [2024-07-14 04:30:10.629610] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.516 [2024-07-14 04:30:10.656008] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.516 NULL1 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2739556 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:50.516 04:30:10 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:50.516 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:50.516 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:50.774 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:50.774 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:50.774 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:50.774 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:50.774 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:50.774 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:50.774 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:50.774 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:50.774 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:50.774 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:50.774 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:50.774 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:50.774 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:50.774 04:30:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.774 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.774 04:30:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.031 04:30:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.031 04:30:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:51.031 04:30:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.031 04:30:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.031 04:30:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.289 04:30:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.289 04:30:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:51.289 04:30:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.289 04:30:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.289 04:30:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.547 04:30:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.547 04:30:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:51.547 04:30:11 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.547 04:30:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.547 04:30:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.112 04:30:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.112 04:30:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:52.112 04:30:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.112 04:30:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.112 04:30:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.370 04:30:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.370 04:30:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:52.370 04:30:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.370 04:30:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.370 04:30:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.628 04:30:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.628 04:30:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:52.628 04:30:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.628 04:30:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.628 04:30:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.886 04:30:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.886 04:30:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:52.886 04:30:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.886 04:30:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.886 04:30:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.153 04:30:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.153 04:30:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:53.153 04:30:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.153 04:30:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.153 04:30:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.719 04:30:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.719 04:30:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:53.719 04:30:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.719 04:30:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.719 04:30:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.976 04:30:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.976 04:30:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:53.976 04:30:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:13:53.976 04:30:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.976 04:30:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.233 04:30:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.233 04:30:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:54.233 04:30:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.233 04:30:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.233 04:30:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.490 04:30:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.490 04:30:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:54.490 04:30:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.490 04:30:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.490 04:30:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.747 04:30:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.747 04:30:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:54.747 04:30:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.747 04:30:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.747 04:30:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.312 04:30:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.312 04:30:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:55.312 04:30:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.312 04:30:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.312 04:30:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.598 04:30:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.598 04:30:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:55.598 04:30:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.598 04:30:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.598 04:30:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.856 04:30:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.856 04:30:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:55.856 04:30:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.856 04:30:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.856 04:30:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.114 04:30:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.114 04:30:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:56.114 04:30:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.114 04:30:16 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.114 04:30:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.371 04:30:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.371 04:30:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:56.371 04:30:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.371 04:30:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.371 04:30:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.935 04:30:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.935 04:30:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:56.935 04:30:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.935 04:30:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.935 04:30:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.192 04:30:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.192 04:30:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:57.192 04:30:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.192 04:30:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.192 04:30:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.449 04:30:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.449 04:30:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:57.449 04:30:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.449 04:30:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.449 04:30:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.706 04:30:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.706 04:30:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:57.706 04:30:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.706 04:30:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.706 04:30:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.964 04:30:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.964 04:30:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:57.964 04:30:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.964 04:30:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.964 04:30:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.528 04:30:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.528 04:30:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:58.528 04:30:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.528 04:30:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:13:58.528 04:30:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.785 04:30:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.785 04:30:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:58.785 04:30:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.785 04:30:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.785 04:30:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.042 04:30:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.042 04:30:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:59.042 04:30:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.042 04:30:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.042 04:30:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.299 04:30:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.299 04:30:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:59.299 04:30:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.299 04:30:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.299 04:30:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.555 04:30:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.555 04:30:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:13:59.555 04:30:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.555 04:30:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.555 04:30:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.119 04:30:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.119 04:30:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:14:00.119 04:30:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.119 04:30:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.119 04:30:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.377 04:30:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.377 04:30:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:14:00.377 04:30:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.377 04:30:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.377 04:30:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.633 04:30:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.633 04:30:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:14:00.633 04:30:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.633 04:30:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.633 04:30:20 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.633 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:00.954 04:30:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.954 04:30:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2739556 00:14:00.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2739556) - No such process 00:14:00.954 04:30:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2739556 00:14:00.954 04:30:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:00.954 rmmod nvme_tcp 00:14:00.954 rmmod nvme_fabrics 00:14:00.954 rmmod nvme_keyring 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2739509 ']' 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2739509 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 2739509 ']' 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 2739509 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2739509 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2739509' 00:14:00.954 killing process with pid 2739509 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 2739509 00:14:00.954 04:30:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 2739509 00:14:01.211 04:30:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:01.211 04:30:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:01.211 04:30:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
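The long run of nearly identical kill -0 / rpc_cmd pairs above is the watchdog loop in connect_stress.sh: while the connect_stress initiator (PID 2739556 in this run) is alive, the script keeps replaying the batch of RPCs it assembled with the seq 1 20 / cat loop earlier, and once kill -0 reports "No such process" it collects the exit status and removes the batch file. A minimal sketch of that pattern, using the path and PID from this run; the redirections are inferred, since xtrace does not echo them:

  PERF_PID=2739556
  rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
  while kill -0 "$PERF_PID"; do       # loop until connect_stress exits on its own
      rpc_cmd < "$rpcs"               # replay the queued RPCs against the running nvmf_tgt
  done
  wait "$PERF_PID"                    # propagate connect_stress's exit status to run_test
  rm -f "$rpcs"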
00:14:01.211 04:30:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:01.211 04:30:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:01.211 04:30:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.211 04:30:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.211 04:30:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.745 04:30:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:03.745 00:14:03.745 real 0m15.286s 00:14:03.745 user 0m38.126s 00:14:03.745 sys 0m6.064s 00:14:03.745 04:30:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:03.745 04:30:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.745 ************************************ 00:14:03.745 END TEST nvmf_connect_stress 00:14:03.745 ************************************ 00:14:03.745 04:30:23 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:03.745 04:30:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:03.745 04:30:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:03.745 04:30:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:03.745 ************************************ 00:14:03.745 START TEST nvmf_fused_ordering 00:14:03.745 ************************************ 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:03.745 * Looking for test storage... 
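The nvmf_fused_ordering run that starts here goes through the same target bring-up that connect_stress used above: source nvmf/common.sh, pick the e810 ports, move one into the cvl_0_0_ns_spdk namespace, start nvmf_tgt, then build the target over RPC. Condensed to the RPC calls visible in the connect_stress trace (rpc_cmd is the harness's wrapper around scripts/rpc.py; fused_ordering's own parameters may differ from these values):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192                                            # TCP transport, options as passed by the test
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10    # allow any host, serial number, max 10 namespaces
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # listen on the namespaced target address
  rpc_cmd bdev_null_create NULL1 1000 512                                                    # null bdev backing the stress I/O
  # connect_stress then hammers that listener from the initiator side (-t 10 in this run):
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10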
00:14:03.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:03.745 04:30:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:05.659 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:05.659 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:05.659 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:05.659 04:30:25 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:05.659 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:05.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:05.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:14:05.659 00:14:05.659 --- 10.0.0.2 ping statistics --- 00:14:05.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.659 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:05.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:05.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:14:05.659 00:14:05.659 --- 10.0.0.1 ping statistics --- 00:14:05.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.659 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:05.659 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.660 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:05.660 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:05.660 04:30:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:05.660 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:05.660 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:05.660 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.660 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2742791 00:14:05.660 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:05.660 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2742791 00:14:05.660 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 2742791 ']' 00:14:05.660 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.660 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:05.660 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.660 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:05.660 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.660 [2024-07-14 04:30:25.597528] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
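(Annotation, assuming the reader wants to reproduce the topology the nvmf_tcp_init trace above just verified with the two pings: the test splits the host's two ice ports into a target/initiator pair so NVMe/TCP traffic crosses real hardware. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are specific to this run; on another machine the names and a root shell requirement would differ.)

  # Sketch of the topology set up by nvmf_tcp_init, condensed from the trace above.
  # cvl_0_0 becomes the target-side port inside a network namespace; cvl_0_1 stays in
  # the root namespace as the initiator-side port.
  ip netns add cvl_0_0_ns_spdk                                        # namespace that will host nvmf_tgt
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                                  # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability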
00:14:05.660 [2024-07-14 04:30:25.597614] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.660 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.660 [2024-07-14 04:30:25.667781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.660 [2024-07-14 04:30:25.757073] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.660 [2024-07-14 04:30:25.757138] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.660 [2024-07-14 04:30:25.757156] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.660 [2024-07-14 04:30:25.757169] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.660 [2024-07-14 04:30:25.757181] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.660 [2024-07-14 04:30:25.757213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.917 [2024-07-14 04:30:25.891996] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.917 [2024-07-14 04:30:25.908172] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- 
target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.917 NULL1 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.917 04:30:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:05.917 [2024-07-14 04:30:25.951797] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:05.917 [2024-07-14 04:30:25.951839] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2742819 ] 00:14:05.917 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.480 Attached to nqn.2016-06.io.spdk:cnode1 00:14:06.480 Namespace ID: 1 size: 1GB 00:14:06.480 fused_ordering(0) 00:14:06.480 fused_ordering(1) 00:14:06.480 fused_ordering(2) 00:14:06.480 fused_ordering(3) 00:14:06.480 fused_ordering(4) 00:14:06.480 fused_ordering(5) 00:14:06.480 fused_ordering(6) 00:14:06.480 fused_ordering(7) 00:14:06.480 fused_ordering(8) 00:14:06.480 fused_ordering(9) 00:14:06.480 fused_ordering(10) 00:14:06.480 fused_ordering(11) 00:14:06.480 fused_ordering(12) 00:14:06.480 fused_ordering(13) 00:14:06.480 fused_ordering(14) 00:14:06.480 fused_ordering(15) 00:14:06.480 fused_ordering(16) 00:14:06.480 fused_ordering(17) 00:14:06.480 fused_ordering(18) 00:14:06.480 fused_ordering(19) 00:14:06.480 fused_ordering(20) 00:14:06.480 fused_ordering(21) 00:14:06.480 fused_ordering(22) 00:14:06.480 fused_ordering(23) 00:14:06.480 fused_ordering(24) 00:14:06.480 fused_ordering(25) 00:14:06.480 fused_ordering(26) 00:14:06.480 fused_ordering(27) 00:14:06.480 fused_ordering(28) 00:14:06.480 fused_ordering(29) 00:14:06.480 fused_ordering(30) 00:14:06.480 fused_ordering(31) 00:14:06.480 fused_ordering(32) 00:14:06.480 fused_ordering(33) 00:14:06.480 fused_ordering(34) 00:14:06.480 fused_ordering(35) 00:14:06.480 fused_ordering(36) 00:14:06.480 fused_ordering(37) 00:14:06.480 fused_ordering(38) 00:14:06.480 fused_ordering(39) 00:14:06.480 fused_ordering(40) 00:14:06.480 fused_ordering(41) 00:14:06.480 fused_ordering(42) 00:14:06.480 fused_ordering(43) 00:14:06.480 fused_ordering(44) 00:14:06.480 fused_ordering(45) 
00:14:06.480 fused_ordering(46) 00:14:06.480 fused_ordering(47) 00:14:06.480 fused_ordering(48) 00:14:06.480 fused_ordering(49) 00:14:06.480 fused_ordering(50) 00:14:06.480 fused_ordering(51) 00:14:06.480 fused_ordering(52) 00:14:06.480 fused_ordering(53) 00:14:06.480 fused_ordering(54) 00:14:06.480 fused_ordering(55) 00:14:06.480 fused_ordering(56) 00:14:06.480 fused_ordering(57) 00:14:06.480 fused_ordering(58) 00:14:06.480 fused_ordering(59) 00:14:06.480 fused_ordering(60) 00:14:06.480 fused_ordering(61) 00:14:06.480 fused_ordering(62) 00:14:06.480 fused_ordering(63) 00:14:06.480 fused_ordering(64) 00:14:06.480 fused_ordering(65) 00:14:06.480 fused_ordering(66) 00:14:06.480 fused_ordering(67) 00:14:06.480 fused_ordering(68) 00:14:06.480 fused_ordering(69) 00:14:06.480 fused_ordering(70) 00:14:06.480 fused_ordering(71) 00:14:06.480 fused_ordering(72) 00:14:06.480 fused_ordering(73) 00:14:06.480 fused_ordering(74) 00:14:06.480 fused_ordering(75) 00:14:06.480 fused_ordering(76) 00:14:06.480 fused_ordering(77) 00:14:06.480 fused_ordering(78) 00:14:06.480 fused_ordering(79) 00:14:06.480 fused_ordering(80) 00:14:06.480 fused_ordering(81) 00:14:06.480 fused_ordering(82) 00:14:06.480 fused_ordering(83) 00:14:06.480 fused_ordering(84) 00:14:06.481 fused_ordering(85) 00:14:06.481 fused_ordering(86) 00:14:06.481 fused_ordering(87) 00:14:06.481 fused_ordering(88) 00:14:06.481 fused_ordering(89) 00:14:06.481 fused_ordering(90) 00:14:06.481 fused_ordering(91) 00:14:06.481 fused_ordering(92) 00:14:06.481 fused_ordering(93) 00:14:06.481 fused_ordering(94) 00:14:06.481 fused_ordering(95) 00:14:06.481 fused_ordering(96) 00:14:06.481 fused_ordering(97) 00:14:06.481 fused_ordering(98) 00:14:06.481 fused_ordering(99) 00:14:06.481 fused_ordering(100) 00:14:06.481 fused_ordering(101) 00:14:06.481 fused_ordering(102) 00:14:06.481 fused_ordering(103) 00:14:06.481 fused_ordering(104) 00:14:06.481 fused_ordering(105) 00:14:06.481 fused_ordering(106) 00:14:06.481 fused_ordering(107) 00:14:06.481 fused_ordering(108) 00:14:06.481 fused_ordering(109) 00:14:06.481 fused_ordering(110) 00:14:06.481 fused_ordering(111) 00:14:06.481 fused_ordering(112) 00:14:06.481 fused_ordering(113) 00:14:06.481 fused_ordering(114) 00:14:06.481 fused_ordering(115) 00:14:06.481 fused_ordering(116) 00:14:06.481 fused_ordering(117) 00:14:06.481 fused_ordering(118) 00:14:06.481 fused_ordering(119) 00:14:06.481 fused_ordering(120) 00:14:06.481 fused_ordering(121) 00:14:06.481 fused_ordering(122) 00:14:06.481 fused_ordering(123) 00:14:06.481 fused_ordering(124) 00:14:06.481 fused_ordering(125) 00:14:06.481 fused_ordering(126) 00:14:06.481 fused_ordering(127) 00:14:06.481 fused_ordering(128) 00:14:06.481 fused_ordering(129) 00:14:06.481 fused_ordering(130) 00:14:06.481 fused_ordering(131) 00:14:06.481 fused_ordering(132) 00:14:06.481 fused_ordering(133) 00:14:06.481 fused_ordering(134) 00:14:06.481 fused_ordering(135) 00:14:06.481 fused_ordering(136) 00:14:06.481 fused_ordering(137) 00:14:06.481 fused_ordering(138) 00:14:06.481 fused_ordering(139) 00:14:06.481 fused_ordering(140) 00:14:06.481 fused_ordering(141) 00:14:06.481 fused_ordering(142) 00:14:06.481 fused_ordering(143) 00:14:06.481 fused_ordering(144) 00:14:06.481 fused_ordering(145) 00:14:06.481 fused_ordering(146) 00:14:06.481 fused_ordering(147) 00:14:06.481 fused_ordering(148) 00:14:06.481 fused_ordering(149) 00:14:06.481 fused_ordering(150) 00:14:06.481 fused_ordering(151) 00:14:06.481 fused_ordering(152) 00:14:06.481 fused_ordering(153) 00:14:06.481 fused_ordering(154) 
00:14:06.481 fused_ordering(155) 00:14:06.481 fused_ordering(156) 00:14:06.481 fused_ordering(157) 00:14:06.481 fused_ordering(158) 00:14:06.481 fused_ordering(159) 00:14:06.481 fused_ordering(160) 00:14:06.481 fused_ordering(161) 00:14:06.481 fused_ordering(162) 00:14:06.481 fused_ordering(163) 00:14:06.481 fused_ordering(164) 00:14:06.481 fused_ordering(165) 00:14:06.481 fused_ordering(166) 00:14:06.481 fused_ordering(167) 00:14:06.481 fused_ordering(168) 00:14:06.481 fused_ordering(169) 00:14:06.481 fused_ordering(170) 00:14:06.481 fused_ordering(171) 00:14:06.481 fused_ordering(172) 00:14:06.481 fused_ordering(173) 00:14:06.481 fused_ordering(174) 00:14:06.481 fused_ordering(175) 00:14:06.481 fused_ordering(176) 00:14:06.481 fused_ordering(177) 00:14:06.481 fused_ordering(178) 00:14:06.481 fused_ordering(179) 00:14:06.481 fused_ordering(180) 00:14:06.481 fused_ordering(181) 00:14:06.481 fused_ordering(182) 00:14:06.481 fused_ordering(183) 00:14:06.481 fused_ordering(184) 00:14:06.481 fused_ordering(185) 00:14:06.481 fused_ordering(186) 00:14:06.481 fused_ordering(187) 00:14:06.481 fused_ordering(188) 00:14:06.481 fused_ordering(189) 00:14:06.481 fused_ordering(190) 00:14:06.481 fused_ordering(191) 00:14:06.481 fused_ordering(192) 00:14:06.481 fused_ordering(193) 00:14:06.481 fused_ordering(194) 00:14:06.481 fused_ordering(195) 00:14:06.481 fused_ordering(196) 00:14:06.481 fused_ordering(197) 00:14:06.481 fused_ordering(198) 00:14:06.481 fused_ordering(199) 00:14:06.481 fused_ordering(200) 00:14:06.481 fused_ordering(201) 00:14:06.481 fused_ordering(202) 00:14:06.481 fused_ordering(203) 00:14:06.481 fused_ordering(204) 00:14:06.481 fused_ordering(205) 00:14:07.045 fused_ordering(206) 00:14:07.045 fused_ordering(207) 00:14:07.045 fused_ordering(208) 00:14:07.045 fused_ordering(209) 00:14:07.045 fused_ordering(210) 00:14:07.045 fused_ordering(211) 00:14:07.045 fused_ordering(212) 00:14:07.045 fused_ordering(213) 00:14:07.045 fused_ordering(214) 00:14:07.045 fused_ordering(215) 00:14:07.045 fused_ordering(216) 00:14:07.045 fused_ordering(217) 00:14:07.045 fused_ordering(218) 00:14:07.045 fused_ordering(219) 00:14:07.045 fused_ordering(220) 00:14:07.045 fused_ordering(221) 00:14:07.045 fused_ordering(222) 00:14:07.045 fused_ordering(223) 00:14:07.045 fused_ordering(224) 00:14:07.045 fused_ordering(225) 00:14:07.045 fused_ordering(226) 00:14:07.045 fused_ordering(227) 00:14:07.045 fused_ordering(228) 00:14:07.045 fused_ordering(229) 00:14:07.045 fused_ordering(230) 00:14:07.045 fused_ordering(231) 00:14:07.045 fused_ordering(232) 00:14:07.045 fused_ordering(233) 00:14:07.045 fused_ordering(234) 00:14:07.045 fused_ordering(235) 00:14:07.045 fused_ordering(236) 00:14:07.045 fused_ordering(237) 00:14:07.045 fused_ordering(238) 00:14:07.045 fused_ordering(239) 00:14:07.045 fused_ordering(240) 00:14:07.045 fused_ordering(241) 00:14:07.045 fused_ordering(242) 00:14:07.045 fused_ordering(243) 00:14:07.045 fused_ordering(244) 00:14:07.045 fused_ordering(245) 00:14:07.045 fused_ordering(246) 00:14:07.045 fused_ordering(247) 00:14:07.045 fused_ordering(248) 00:14:07.045 fused_ordering(249) 00:14:07.045 fused_ordering(250) 00:14:07.045 fused_ordering(251) 00:14:07.045 fused_ordering(252) 00:14:07.045 fused_ordering(253) 00:14:07.045 fused_ordering(254) 00:14:07.045 fused_ordering(255) 00:14:07.045 fused_ordering(256) 00:14:07.045 fused_ordering(257) 00:14:07.045 fused_ordering(258) 00:14:07.045 fused_ordering(259) 00:14:07.045 fused_ordering(260) 00:14:07.045 fused_ordering(261) 00:14:07.045 
fused_ordering(262) 00:14:07.045 fused_ordering(263) 00:14:07.045 fused_ordering(264) 00:14:07.045 fused_ordering(265) 00:14:07.045 fused_ordering(266) 00:14:07.045 fused_ordering(267) 00:14:07.045 fused_ordering(268) 00:14:07.045 fused_ordering(269) 00:14:07.045 fused_ordering(270) 00:14:07.045 fused_ordering(271) 00:14:07.045 fused_ordering(272) 00:14:07.045 fused_ordering(273) 00:14:07.045 fused_ordering(274) 00:14:07.045 fused_ordering(275) 00:14:07.045 fused_ordering(276) 00:14:07.045 fused_ordering(277) 00:14:07.045 fused_ordering(278) 00:14:07.045 fused_ordering(279) 00:14:07.045 fused_ordering(280) 00:14:07.045 fused_ordering(281) 00:14:07.045 fused_ordering(282) 00:14:07.045 fused_ordering(283) 00:14:07.045 fused_ordering(284) 00:14:07.045 fused_ordering(285) 00:14:07.045 fused_ordering(286) 00:14:07.045 fused_ordering(287) 00:14:07.045 fused_ordering(288) 00:14:07.045 fused_ordering(289) 00:14:07.045 fused_ordering(290) 00:14:07.045 fused_ordering(291) 00:14:07.045 fused_ordering(292) 00:14:07.045 fused_ordering(293) 00:14:07.045 fused_ordering(294) 00:14:07.045 fused_ordering(295) 00:14:07.045 fused_ordering(296) 00:14:07.045 fused_ordering(297) 00:14:07.045 fused_ordering(298) 00:14:07.045 fused_ordering(299) 00:14:07.045 fused_ordering(300) 00:14:07.045 fused_ordering(301) 00:14:07.045 fused_ordering(302) 00:14:07.045 fused_ordering(303) 00:14:07.045 fused_ordering(304) 00:14:07.045 fused_ordering(305) 00:14:07.045 fused_ordering(306) 00:14:07.045 fused_ordering(307) 00:14:07.045 fused_ordering(308) 00:14:07.045 fused_ordering(309) 00:14:07.045 fused_ordering(310) 00:14:07.045 fused_ordering(311) 00:14:07.045 fused_ordering(312) 00:14:07.045 fused_ordering(313) 00:14:07.045 fused_ordering(314) 00:14:07.045 fused_ordering(315) 00:14:07.045 fused_ordering(316) 00:14:07.045 fused_ordering(317) 00:14:07.045 fused_ordering(318) 00:14:07.045 fused_ordering(319) 00:14:07.045 fused_ordering(320) 00:14:07.045 fused_ordering(321) 00:14:07.045 fused_ordering(322) 00:14:07.045 fused_ordering(323) 00:14:07.045 fused_ordering(324) 00:14:07.045 fused_ordering(325) 00:14:07.045 fused_ordering(326) 00:14:07.045 fused_ordering(327) 00:14:07.045 fused_ordering(328) 00:14:07.045 fused_ordering(329) 00:14:07.045 fused_ordering(330) 00:14:07.045 fused_ordering(331) 00:14:07.045 fused_ordering(332) 00:14:07.045 fused_ordering(333) 00:14:07.045 fused_ordering(334) 00:14:07.045 fused_ordering(335) 00:14:07.045 fused_ordering(336) 00:14:07.045 fused_ordering(337) 00:14:07.045 fused_ordering(338) 00:14:07.045 fused_ordering(339) 00:14:07.045 fused_ordering(340) 00:14:07.045 fused_ordering(341) 00:14:07.045 fused_ordering(342) 00:14:07.045 fused_ordering(343) 00:14:07.045 fused_ordering(344) 00:14:07.045 fused_ordering(345) 00:14:07.045 fused_ordering(346) 00:14:07.045 fused_ordering(347) 00:14:07.045 fused_ordering(348) 00:14:07.045 fused_ordering(349) 00:14:07.045 fused_ordering(350) 00:14:07.045 fused_ordering(351) 00:14:07.045 fused_ordering(352) 00:14:07.045 fused_ordering(353) 00:14:07.045 fused_ordering(354) 00:14:07.045 fused_ordering(355) 00:14:07.045 fused_ordering(356) 00:14:07.045 fused_ordering(357) 00:14:07.045 fused_ordering(358) 00:14:07.045 fused_ordering(359) 00:14:07.045 fused_ordering(360) 00:14:07.045 fused_ordering(361) 00:14:07.045 fused_ordering(362) 00:14:07.045 fused_ordering(363) 00:14:07.045 fused_ordering(364) 00:14:07.045 fused_ordering(365) 00:14:07.045 fused_ordering(366) 00:14:07.045 fused_ordering(367) 00:14:07.046 fused_ordering(368) 00:14:07.046 fused_ordering(369) 
00:14:07.046 fused_ordering(370) 00:14:07.046 fused_ordering(371) 00:14:07.046 fused_ordering(372) 00:14:07.046 fused_ordering(373) 00:14:07.046 fused_ordering(374) 00:14:07.046 fused_ordering(375) 00:14:07.046 fused_ordering(376) 00:14:07.046 fused_ordering(377) 00:14:07.046 fused_ordering(378) 00:14:07.046 fused_ordering(379) 00:14:07.046 fused_ordering(380) 00:14:07.046 fused_ordering(381) 00:14:07.046 fused_ordering(382) 00:14:07.046 fused_ordering(383) 00:14:07.046 fused_ordering(384) 00:14:07.046 fused_ordering(385) 00:14:07.046 fused_ordering(386) 00:14:07.046 fused_ordering(387) 00:14:07.046 fused_ordering(388) 00:14:07.046 fused_ordering(389) 00:14:07.046 fused_ordering(390) 00:14:07.046 fused_ordering(391) 00:14:07.046 fused_ordering(392) 00:14:07.046 fused_ordering(393) 00:14:07.046 fused_ordering(394) 00:14:07.046 fused_ordering(395) 00:14:07.046 fused_ordering(396) 00:14:07.046 fused_ordering(397) 00:14:07.046 fused_ordering(398) 00:14:07.046 fused_ordering(399) 00:14:07.046 fused_ordering(400) 00:14:07.046 fused_ordering(401) 00:14:07.046 fused_ordering(402) 00:14:07.046 fused_ordering(403) 00:14:07.046 fused_ordering(404) 00:14:07.046 fused_ordering(405) 00:14:07.046 fused_ordering(406) 00:14:07.046 fused_ordering(407) 00:14:07.046 fused_ordering(408) 00:14:07.046 fused_ordering(409) 00:14:07.046 fused_ordering(410) 00:14:07.978 fused_ordering(411) 00:14:07.978 fused_ordering(412) 00:14:07.978 fused_ordering(413) 00:14:07.978 fused_ordering(414) 00:14:07.978 fused_ordering(415) 00:14:07.978 fused_ordering(416) 00:14:07.978 fused_ordering(417) 00:14:07.978 fused_ordering(418) 00:14:07.978 fused_ordering(419) 00:14:07.978 fused_ordering(420) 00:14:07.978 fused_ordering(421) 00:14:07.978 fused_ordering(422) 00:14:07.978 fused_ordering(423) 00:14:07.978 fused_ordering(424) 00:14:07.978 fused_ordering(425) 00:14:07.978 fused_ordering(426) 00:14:07.978 fused_ordering(427) 00:14:07.978 fused_ordering(428) 00:14:07.978 fused_ordering(429) 00:14:07.978 fused_ordering(430) 00:14:07.978 fused_ordering(431) 00:14:07.978 fused_ordering(432) 00:14:07.978 fused_ordering(433) 00:14:07.978 fused_ordering(434) 00:14:07.978 fused_ordering(435) 00:14:07.978 fused_ordering(436) 00:14:07.978 fused_ordering(437) 00:14:07.978 fused_ordering(438) 00:14:07.978 fused_ordering(439) 00:14:07.978 fused_ordering(440) 00:14:07.978 fused_ordering(441) 00:14:07.978 fused_ordering(442) 00:14:07.978 fused_ordering(443) 00:14:07.978 fused_ordering(444) 00:14:07.978 fused_ordering(445) 00:14:07.978 fused_ordering(446) 00:14:07.978 fused_ordering(447) 00:14:07.978 fused_ordering(448) 00:14:07.978 fused_ordering(449) 00:14:07.978 fused_ordering(450) 00:14:07.978 fused_ordering(451) 00:14:07.978 fused_ordering(452) 00:14:07.978 fused_ordering(453) 00:14:07.978 fused_ordering(454) 00:14:07.978 fused_ordering(455) 00:14:07.978 fused_ordering(456) 00:14:07.978 fused_ordering(457) 00:14:07.978 fused_ordering(458) 00:14:07.978 fused_ordering(459) 00:14:07.978 fused_ordering(460) 00:14:07.978 fused_ordering(461) 00:14:07.978 fused_ordering(462) 00:14:07.978 fused_ordering(463) 00:14:07.978 fused_ordering(464) 00:14:07.978 fused_ordering(465) 00:14:07.978 fused_ordering(466) 00:14:07.978 fused_ordering(467) 00:14:07.978 fused_ordering(468) 00:14:07.978 fused_ordering(469) 00:14:07.978 fused_ordering(470) 00:14:07.978 fused_ordering(471) 00:14:07.978 fused_ordering(472) 00:14:07.978 fused_ordering(473) 00:14:07.978 fused_ordering(474) 00:14:07.978 fused_ordering(475) 00:14:07.978 fused_ordering(476) 00:14:07.978 
fused_ordering(477) 00:14:07.978 fused_ordering(478) 00:14:07.978 fused_ordering(479) 00:14:07.978 fused_ordering(480) 00:14:07.978 fused_ordering(481) 00:14:07.978 fused_ordering(482) 00:14:07.978 fused_ordering(483) 00:14:07.978 fused_ordering(484) 00:14:07.978 fused_ordering(485) 00:14:07.978 fused_ordering(486) 00:14:07.978 fused_ordering(487) 00:14:07.978 fused_ordering(488) 00:14:07.978 fused_ordering(489) 00:14:07.978 fused_ordering(490) 00:14:07.978 fused_ordering(491) 00:14:07.978 fused_ordering(492) 00:14:07.978 fused_ordering(493) 00:14:07.978 fused_ordering(494) 00:14:07.978 fused_ordering(495) 00:14:07.978 fused_ordering(496) 00:14:07.978 fused_ordering(497) 00:14:07.978 fused_ordering(498) 00:14:07.978 fused_ordering(499) 00:14:07.978 fused_ordering(500) 00:14:07.978 fused_ordering(501) 00:14:07.978 fused_ordering(502) 00:14:07.978 fused_ordering(503) 00:14:07.978 fused_ordering(504) 00:14:07.978 fused_ordering(505) 00:14:07.978 fused_ordering(506) 00:14:07.978 fused_ordering(507) 00:14:07.978 fused_ordering(508) 00:14:07.978 fused_ordering(509) 00:14:07.978 fused_ordering(510) 00:14:07.978 fused_ordering(511) 00:14:07.978 fused_ordering(512) 00:14:07.978 fused_ordering(513) 00:14:07.978 fused_ordering(514) 00:14:07.978 fused_ordering(515) 00:14:07.978 fused_ordering(516) 00:14:07.978 fused_ordering(517) 00:14:07.978 fused_ordering(518) 00:14:07.978 fused_ordering(519) 00:14:07.978 fused_ordering(520) 00:14:07.978 fused_ordering(521) 00:14:07.978 fused_ordering(522) 00:14:07.978 fused_ordering(523) 00:14:07.978 fused_ordering(524) 00:14:07.978 fused_ordering(525) 00:14:07.978 fused_ordering(526) 00:14:07.978 fused_ordering(527) 00:14:07.978 fused_ordering(528) 00:14:07.978 fused_ordering(529) 00:14:07.978 fused_ordering(530) 00:14:07.978 fused_ordering(531) 00:14:07.978 fused_ordering(532) 00:14:07.978 fused_ordering(533) 00:14:07.978 fused_ordering(534) 00:14:07.978 fused_ordering(535) 00:14:07.978 fused_ordering(536) 00:14:07.978 fused_ordering(537) 00:14:07.978 fused_ordering(538) 00:14:07.978 fused_ordering(539) 00:14:07.978 fused_ordering(540) 00:14:07.978 fused_ordering(541) 00:14:07.978 fused_ordering(542) 00:14:07.978 fused_ordering(543) 00:14:07.978 fused_ordering(544) 00:14:07.978 fused_ordering(545) 00:14:07.978 fused_ordering(546) 00:14:07.978 fused_ordering(547) 00:14:07.978 fused_ordering(548) 00:14:07.978 fused_ordering(549) 00:14:07.978 fused_ordering(550) 00:14:07.978 fused_ordering(551) 00:14:07.978 fused_ordering(552) 00:14:07.978 fused_ordering(553) 00:14:07.978 fused_ordering(554) 00:14:07.978 fused_ordering(555) 00:14:07.978 fused_ordering(556) 00:14:07.978 fused_ordering(557) 00:14:07.978 fused_ordering(558) 00:14:07.978 fused_ordering(559) 00:14:07.978 fused_ordering(560) 00:14:07.978 fused_ordering(561) 00:14:07.978 fused_ordering(562) 00:14:07.978 fused_ordering(563) 00:14:07.978 fused_ordering(564) 00:14:07.978 fused_ordering(565) 00:14:07.978 fused_ordering(566) 00:14:07.978 fused_ordering(567) 00:14:07.978 fused_ordering(568) 00:14:07.978 fused_ordering(569) 00:14:07.978 fused_ordering(570) 00:14:07.978 fused_ordering(571) 00:14:07.978 fused_ordering(572) 00:14:07.978 fused_ordering(573) 00:14:07.978 fused_ordering(574) 00:14:07.978 fused_ordering(575) 00:14:07.978 fused_ordering(576) 00:14:07.978 fused_ordering(577) 00:14:07.978 fused_ordering(578) 00:14:07.978 fused_ordering(579) 00:14:07.978 fused_ordering(580) 00:14:07.978 fused_ordering(581) 00:14:07.978 fused_ordering(582) 00:14:07.978 fused_ordering(583) 00:14:07.978 fused_ordering(584) 
00:14:07.978 fused_ordering(585) 00:14:07.978 fused_ordering(586) 00:14:07.978 fused_ordering(587) 00:14:07.978 fused_ordering(588) 00:14:07.978 fused_ordering(589) 00:14:07.978 fused_ordering(590) 00:14:07.978 fused_ordering(591) 00:14:07.978 fused_ordering(592) 00:14:07.978 fused_ordering(593) 00:14:07.978 fused_ordering(594) 00:14:07.978 fused_ordering(595) 00:14:07.978 fused_ordering(596) 00:14:07.978 fused_ordering(597) 00:14:07.978 fused_ordering(598) 00:14:07.978 fused_ordering(599) 00:14:07.978 fused_ordering(600) 00:14:07.978 fused_ordering(601) 00:14:07.978 fused_ordering(602) 00:14:07.978 fused_ordering(603) 00:14:07.978 fused_ordering(604) 00:14:07.978 fused_ordering(605) 00:14:07.978 fused_ordering(606) 00:14:07.978 fused_ordering(607) 00:14:07.978 fused_ordering(608) 00:14:07.978 fused_ordering(609) 00:14:07.978 fused_ordering(610) 00:14:07.978 fused_ordering(611) 00:14:07.978 fused_ordering(612) 00:14:07.978 fused_ordering(613) 00:14:07.978 fused_ordering(614) 00:14:07.978 fused_ordering(615) 00:14:08.545 fused_ordering(616) 00:14:08.545 fused_ordering(617) 00:14:08.545 fused_ordering(618) 00:14:08.545 fused_ordering(619) 00:14:08.545 fused_ordering(620) 00:14:08.545 fused_ordering(621) 00:14:08.545 fused_ordering(622) 00:14:08.545 fused_ordering(623) 00:14:08.545 fused_ordering(624) 00:14:08.545 fused_ordering(625) 00:14:08.545 fused_ordering(626) 00:14:08.545 fused_ordering(627) 00:14:08.545 fused_ordering(628) 00:14:08.545 fused_ordering(629) 00:14:08.545 fused_ordering(630) 00:14:08.545 fused_ordering(631) 00:14:08.545 fused_ordering(632) 00:14:08.545 fused_ordering(633) 00:14:08.545 fused_ordering(634) 00:14:08.545 fused_ordering(635) 00:14:08.545 fused_ordering(636) 00:14:08.545 fused_ordering(637) 00:14:08.545 fused_ordering(638) 00:14:08.545 fused_ordering(639) 00:14:08.545 fused_ordering(640) 00:14:08.545 fused_ordering(641) 00:14:08.545 fused_ordering(642) 00:14:08.545 fused_ordering(643) 00:14:08.545 fused_ordering(644) 00:14:08.545 fused_ordering(645) 00:14:08.545 fused_ordering(646) 00:14:08.545 fused_ordering(647) 00:14:08.545 fused_ordering(648) 00:14:08.545 fused_ordering(649) 00:14:08.545 fused_ordering(650) 00:14:08.545 fused_ordering(651) 00:14:08.545 fused_ordering(652) 00:14:08.545 fused_ordering(653) 00:14:08.545 fused_ordering(654) 00:14:08.545 fused_ordering(655) 00:14:08.545 fused_ordering(656) 00:14:08.545 fused_ordering(657) 00:14:08.545 fused_ordering(658) 00:14:08.545 fused_ordering(659) 00:14:08.545 fused_ordering(660) 00:14:08.545 fused_ordering(661) 00:14:08.545 fused_ordering(662) 00:14:08.545 fused_ordering(663) 00:14:08.545 fused_ordering(664) 00:14:08.545 fused_ordering(665) 00:14:08.545 fused_ordering(666) 00:14:08.545 fused_ordering(667) 00:14:08.545 fused_ordering(668) 00:14:08.545 fused_ordering(669) 00:14:08.545 fused_ordering(670) 00:14:08.545 fused_ordering(671) 00:14:08.545 fused_ordering(672) 00:14:08.545 fused_ordering(673) 00:14:08.545 fused_ordering(674) 00:14:08.545 fused_ordering(675) 00:14:08.545 fused_ordering(676) 00:14:08.545 fused_ordering(677) 00:14:08.545 fused_ordering(678) 00:14:08.545 fused_ordering(679) 00:14:08.545 fused_ordering(680) 00:14:08.545 fused_ordering(681) 00:14:08.545 fused_ordering(682) 00:14:08.545 fused_ordering(683) 00:14:08.545 fused_ordering(684) 00:14:08.545 fused_ordering(685) 00:14:08.545 fused_ordering(686) 00:14:08.545 fused_ordering(687) 00:14:08.545 fused_ordering(688) 00:14:08.545 fused_ordering(689) 00:14:08.545 fused_ordering(690) 00:14:08.545 fused_ordering(691) 00:14:08.545 
fused_ordering(692) 00:14:08.545 fused_ordering(693) 00:14:08.545 fused_ordering(694) 00:14:08.545 fused_ordering(695) 00:14:08.545 fused_ordering(696) 00:14:08.545 fused_ordering(697) 00:14:08.545 fused_ordering(698) 00:14:08.545 fused_ordering(699) 00:14:08.545 fused_ordering(700) 00:14:08.545 fused_ordering(701) 00:14:08.545 fused_ordering(702) 00:14:08.545 fused_ordering(703) 00:14:08.545 fused_ordering(704) 00:14:08.545 fused_ordering(705) 00:14:08.545 fused_ordering(706) 00:14:08.545 fused_ordering(707) 00:14:08.545 fused_ordering(708) 00:14:08.545 fused_ordering(709) 00:14:08.545 fused_ordering(710) 00:14:08.545 fused_ordering(711) 00:14:08.545 fused_ordering(712) 00:14:08.545 fused_ordering(713) 00:14:08.545 fused_ordering(714) 00:14:08.545 fused_ordering(715) 00:14:08.545 fused_ordering(716) 00:14:08.545 fused_ordering(717) 00:14:08.545 fused_ordering(718) 00:14:08.545 fused_ordering(719) 00:14:08.545 fused_ordering(720) 00:14:08.545 fused_ordering(721) 00:14:08.545 fused_ordering(722) 00:14:08.545 fused_ordering(723) 00:14:08.545 fused_ordering(724) 00:14:08.545 fused_ordering(725) 00:14:08.545 fused_ordering(726) 00:14:08.545 fused_ordering(727) 00:14:08.545 fused_ordering(728) 00:14:08.545 fused_ordering(729) 00:14:08.545 fused_ordering(730) 00:14:08.545 fused_ordering(731) 00:14:08.545 fused_ordering(732) 00:14:08.545 fused_ordering(733) 00:14:08.545 fused_ordering(734) 00:14:08.545 fused_ordering(735) 00:14:08.545 fused_ordering(736) 00:14:08.545 fused_ordering(737) 00:14:08.545 fused_ordering(738) 00:14:08.545 fused_ordering(739) 00:14:08.545 fused_ordering(740) 00:14:08.545 fused_ordering(741) 00:14:08.545 fused_ordering(742) 00:14:08.545 fused_ordering(743) 00:14:08.545 fused_ordering(744) 00:14:08.545 fused_ordering(745) 00:14:08.545 fused_ordering(746) 00:14:08.545 fused_ordering(747) 00:14:08.545 fused_ordering(748) 00:14:08.545 fused_ordering(749) 00:14:08.545 fused_ordering(750) 00:14:08.545 fused_ordering(751) 00:14:08.545 fused_ordering(752) 00:14:08.545 fused_ordering(753) 00:14:08.545 fused_ordering(754) 00:14:08.545 fused_ordering(755) 00:14:08.545 fused_ordering(756) 00:14:08.545 fused_ordering(757) 00:14:08.545 fused_ordering(758) 00:14:08.545 fused_ordering(759) 00:14:08.545 fused_ordering(760) 00:14:08.545 fused_ordering(761) 00:14:08.545 fused_ordering(762) 00:14:08.545 fused_ordering(763) 00:14:08.545 fused_ordering(764) 00:14:08.545 fused_ordering(765) 00:14:08.545 fused_ordering(766) 00:14:08.545 fused_ordering(767) 00:14:08.545 fused_ordering(768) 00:14:08.545 fused_ordering(769) 00:14:08.545 fused_ordering(770) 00:14:08.545 fused_ordering(771) 00:14:08.545 fused_ordering(772) 00:14:08.545 fused_ordering(773) 00:14:08.545 fused_ordering(774) 00:14:08.545 fused_ordering(775) 00:14:08.545 fused_ordering(776) 00:14:08.545 fused_ordering(777) 00:14:08.545 fused_ordering(778) 00:14:08.545 fused_ordering(779) 00:14:08.545 fused_ordering(780) 00:14:08.545 fused_ordering(781) 00:14:08.545 fused_ordering(782) 00:14:08.545 fused_ordering(783) 00:14:08.545 fused_ordering(784) 00:14:08.545 fused_ordering(785) 00:14:08.545 fused_ordering(786) 00:14:08.545 fused_ordering(787) 00:14:08.545 fused_ordering(788) 00:14:08.545 fused_ordering(789) 00:14:08.545 fused_ordering(790) 00:14:08.545 fused_ordering(791) 00:14:08.545 fused_ordering(792) 00:14:08.545 fused_ordering(793) 00:14:08.545 fused_ordering(794) 00:14:08.545 fused_ordering(795) 00:14:08.545 fused_ordering(796) 00:14:08.545 fused_ordering(797) 00:14:08.545 fused_ordering(798) 00:14:08.545 fused_ordering(799) 
00:14:08.545 fused_ordering(800) 00:14:08.545 fused_ordering(801) 00:14:08.545 fused_ordering(802) 00:14:08.545 fused_ordering(803) 00:14:08.545 fused_ordering(804) 00:14:08.545 fused_ordering(805) 00:14:08.545 fused_ordering(806) 00:14:08.545 fused_ordering(807) 00:14:08.545 fused_ordering(808) 00:14:08.545 fused_ordering(809) 00:14:08.545 fused_ordering(810) 00:14:08.545 fused_ordering(811) 00:14:08.545 fused_ordering(812) 00:14:08.545 fused_ordering(813) 00:14:08.545 fused_ordering(814) 00:14:08.545 fused_ordering(815) 00:14:08.545 fused_ordering(816) 00:14:08.545 fused_ordering(817) 00:14:08.545 fused_ordering(818) 00:14:08.546 fused_ordering(819) 00:14:08.546 fused_ordering(820) 00:14:09.478 fused_ordering(821) 00:14:09.478 fused_ordering(822) 00:14:09.478 fused_ordering(823) 00:14:09.478 fused_ordering(824) 00:14:09.479 fused_ordering(825) 00:14:09.479 fused_ordering(826) 00:14:09.479 fused_ordering(827) 00:14:09.479 fused_ordering(828) 00:14:09.479 fused_ordering(829) 00:14:09.479 fused_ordering(830) 00:14:09.479 fused_ordering(831) 00:14:09.479 fused_ordering(832) 00:14:09.479 fused_ordering(833) 00:14:09.479 fused_ordering(834) 00:14:09.479 fused_ordering(835) 00:14:09.479 fused_ordering(836) 00:14:09.479 fused_ordering(837) 00:14:09.479 fused_ordering(838) 00:14:09.479 fused_ordering(839) 00:14:09.479 fused_ordering(840) 00:14:09.479 fused_ordering(841) 00:14:09.479 fused_ordering(842) 00:14:09.479 fused_ordering(843) 00:14:09.479 fused_ordering(844) 00:14:09.479 fused_ordering(845) 00:14:09.479 fused_ordering(846) 00:14:09.479 fused_ordering(847) 00:14:09.479 fused_ordering(848) 00:14:09.479 fused_ordering(849) 00:14:09.479 fused_ordering(850) 00:14:09.479 fused_ordering(851) 00:14:09.479 fused_ordering(852) 00:14:09.479 fused_ordering(853) 00:14:09.479 fused_ordering(854) 00:14:09.479 fused_ordering(855) 00:14:09.479 fused_ordering(856) 00:14:09.479 fused_ordering(857) 00:14:09.479 fused_ordering(858) 00:14:09.479 fused_ordering(859) 00:14:09.479 fused_ordering(860) 00:14:09.479 fused_ordering(861) 00:14:09.479 fused_ordering(862) 00:14:09.479 fused_ordering(863) 00:14:09.479 fused_ordering(864) 00:14:09.479 fused_ordering(865) 00:14:09.479 fused_ordering(866) 00:14:09.479 fused_ordering(867) 00:14:09.479 fused_ordering(868) 00:14:09.479 fused_ordering(869) 00:14:09.479 fused_ordering(870) 00:14:09.479 fused_ordering(871) 00:14:09.479 fused_ordering(872) 00:14:09.479 fused_ordering(873) 00:14:09.479 fused_ordering(874) 00:14:09.479 fused_ordering(875) 00:14:09.479 fused_ordering(876) 00:14:09.479 fused_ordering(877) 00:14:09.479 fused_ordering(878) 00:14:09.479 fused_ordering(879) 00:14:09.479 fused_ordering(880) 00:14:09.479 fused_ordering(881) 00:14:09.479 fused_ordering(882) 00:14:09.479 fused_ordering(883) 00:14:09.479 fused_ordering(884) 00:14:09.479 fused_ordering(885) 00:14:09.479 fused_ordering(886) 00:14:09.479 fused_ordering(887) 00:14:09.479 fused_ordering(888) 00:14:09.479 fused_ordering(889) 00:14:09.479 fused_ordering(890) 00:14:09.479 fused_ordering(891) 00:14:09.479 fused_ordering(892) 00:14:09.479 fused_ordering(893) 00:14:09.479 fused_ordering(894) 00:14:09.479 fused_ordering(895) 00:14:09.479 fused_ordering(896) 00:14:09.479 fused_ordering(897) 00:14:09.479 fused_ordering(898) 00:14:09.479 fused_ordering(899) 00:14:09.479 fused_ordering(900) 00:14:09.479 fused_ordering(901) 00:14:09.479 fused_ordering(902) 00:14:09.479 fused_ordering(903) 00:14:09.479 fused_ordering(904) 00:14:09.479 fused_ordering(905) 00:14:09.479 fused_ordering(906) 00:14:09.479 
fused_ordering(907) 00:14:09.479 fused_ordering(908) 00:14:09.479 fused_ordering(909) 00:14:09.479 fused_ordering(910) 00:14:09.479 fused_ordering(911) 00:14:09.479 fused_ordering(912) 00:14:09.479 fused_ordering(913) 00:14:09.479 fused_ordering(914) 00:14:09.479 fused_ordering(915) 00:14:09.479 fused_ordering(916) 00:14:09.479 fused_ordering(917) 00:14:09.479 fused_ordering(918) 00:14:09.479 fused_ordering(919) 00:14:09.479 fused_ordering(920) 00:14:09.479 fused_ordering(921) 00:14:09.479 fused_ordering(922) 00:14:09.479 fused_ordering(923) 00:14:09.479 fused_ordering(924) 00:14:09.479 fused_ordering(925) 00:14:09.479 fused_ordering(926) 00:14:09.479 fused_ordering(927) 00:14:09.479 fused_ordering(928) 00:14:09.479 fused_ordering(929) 00:14:09.479 fused_ordering(930) 00:14:09.479 fused_ordering(931) 00:14:09.479 fused_ordering(932) 00:14:09.479 fused_ordering(933) 00:14:09.479 fused_ordering(934) 00:14:09.479 fused_ordering(935) 00:14:09.479 fused_ordering(936) 00:14:09.479 fused_ordering(937) 00:14:09.479 fused_ordering(938) 00:14:09.479 fused_ordering(939) 00:14:09.479 fused_ordering(940) 00:14:09.479 fused_ordering(941) 00:14:09.479 fused_ordering(942) 00:14:09.479 fused_ordering(943) 00:14:09.479 fused_ordering(944) 00:14:09.479 fused_ordering(945) 00:14:09.479 fused_ordering(946) 00:14:09.479 fused_ordering(947) 00:14:09.479 fused_ordering(948) 00:14:09.479 fused_ordering(949) 00:14:09.479 fused_ordering(950) 00:14:09.479 fused_ordering(951) 00:14:09.479 fused_ordering(952) 00:14:09.479 fused_ordering(953) 00:14:09.479 fused_ordering(954) 00:14:09.479 fused_ordering(955) 00:14:09.479 fused_ordering(956) 00:14:09.479 fused_ordering(957) 00:14:09.479 fused_ordering(958) 00:14:09.479 fused_ordering(959) 00:14:09.479 fused_ordering(960) 00:14:09.479 fused_ordering(961) 00:14:09.479 fused_ordering(962) 00:14:09.479 fused_ordering(963) 00:14:09.479 fused_ordering(964) 00:14:09.479 fused_ordering(965) 00:14:09.479 fused_ordering(966) 00:14:09.479 fused_ordering(967) 00:14:09.479 fused_ordering(968) 00:14:09.479 fused_ordering(969) 00:14:09.479 fused_ordering(970) 00:14:09.479 fused_ordering(971) 00:14:09.479 fused_ordering(972) 00:14:09.479 fused_ordering(973) 00:14:09.479 fused_ordering(974) 00:14:09.479 fused_ordering(975) 00:14:09.479 fused_ordering(976) 00:14:09.479 fused_ordering(977) 00:14:09.479 fused_ordering(978) 00:14:09.479 fused_ordering(979) 00:14:09.479 fused_ordering(980) 00:14:09.479 fused_ordering(981) 00:14:09.479 fused_ordering(982) 00:14:09.479 fused_ordering(983) 00:14:09.479 fused_ordering(984) 00:14:09.479 fused_ordering(985) 00:14:09.479 fused_ordering(986) 00:14:09.479 fused_ordering(987) 00:14:09.479 fused_ordering(988) 00:14:09.479 fused_ordering(989) 00:14:09.479 fused_ordering(990) 00:14:09.479 fused_ordering(991) 00:14:09.479 fused_ordering(992) 00:14:09.479 fused_ordering(993) 00:14:09.479 fused_ordering(994) 00:14:09.479 fused_ordering(995) 00:14:09.479 fused_ordering(996) 00:14:09.479 fused_ordering(997) 00:14:09.479 fused_ordering(998) 00:14:09.479 fused_ordering(999) 00:14:09.479 fused_ordering(1000) 00:14:09.479 fused_ordering(1001) 00:14:09.479 fused_ordering(1002) 00:14:09.479 fused_ordering(1003) 00:14:09.479 fused_ordering(1004) 00:14:09.479 fused_ordering(1005) 00:14:09.479 fused_ordering(1006) 00:14:09.479 fused_ordering(1007) 00:14:09.479 fused_ordering(1008) 00:14:09.479 fused_ordering(1009) 00:14:09.479 fused_ordering(1010) 00:14:09.479 fused_ordering(1011) 00:14:09.479 fused_ordering(1012) 00:14:09.479 fused_ordering(1013) 00:14:09.479 
fused_ordering(1014) 00:14:09.479 fused_ordering(1015) 00:14:09.479 fused_ordering(1016) 00:14:09.479 fused_ordering(1017) 00:14:09.479 fused_ordering(1018) 00:14:09.479 fused_ordering(1019) 00:14:09.479 fused_ordering(1020) 00:14:09.479 fused_ordering(1021) 00:14:09.479 fused_ordering(1022) 00:14:09.479 fused_ordering(1023) 00:14:09.479 04:30:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:09.479 04:30:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:09.479 04:30:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:09.479 04:30:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:09.479 04:30:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:09.479 04:30:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:09.479 04:30:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:09.479 04:30:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:09.479 rmmod nvme_tcp 00:14:09.479 rmmod nvme_fabrics 00:14:09.479 rmmod nvme_keyring 00:14:09.479 04:30:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:09.738 04:30:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:09.738 04:30:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:09.738 04:30:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2742791 ']' 00:14:09.738 04:30:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2742791 00:14:09.738 04:30:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 2742791 ']' 00:14:09.738 04:30:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 2742791 00:14:09.738 04:30:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:14:09.738 04:30:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:09.738 04:30:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2742791 00:14:09.738 04:30:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:09.738 04:30:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:09.738 04:30:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2742791' 00:14:09.738 killing process with pid 2742791 00:14:09.738 04:30:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 2742791 00:14:09.738 04:30:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 2742791 00:14:09.996 04:30:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:09.996 04:30:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:09.996 04:30:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:09.996 04:30:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:09.996 04:30:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:09.996 04:30:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.996 04:30:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
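(Annotation: the target bring-up and test invocation traced by fused_ordering.sh above reduce to roughly the sketch below. All values, including the cnode1 NQN, the 1000 MB x 512 B null bdev, and the 10.0.0.2:4420 listener, are taken from this run; using scripts/rpc.py is shown as one equivalent way to issue the same RPCs that the harness's rpc_cmd helper sends over /var/tmp/spdk.sock, and the wait for the RPC socket is abbreviated to a comment.)

  # Recap of the captured setup and test run (values from this run, not a general recipe).
  NS="ip netns exec cvl_0_0_ns_spdk"
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  $NS $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &                # target app pinned to core 1
  # ... wait for /var/tmp/spdk.sock to appear, then configure the target:
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $SPDK/scripts/rpc.py bdev_null_create NULL1 1000 512                # null bdev reported above as a 1GB namespace
  $SPDK/scripts/rpc.py bdev_wait_for_examine
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # The fused_ordering tool then connects from the root namespace as the initiator:
  $SPDK/test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'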
00:14:09.996 04:30:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.900 04:30:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:11.900 00:14:11.900 real 0m8.567s 00:14:11.900 user 0m6.199s 00:14:11.900 sys 0m4.268s 00:14:11.900 04:30:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:11.900 04:30:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:11.900 ************************************ 00:14:11.900 END TEST nvmf_fused_ordering 00:14:11.900 ************************************ 00:14:11.900 04:30:32 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:11.900 04:30:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:11.900 04:30:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:11.900 04:30:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:11.900 ************************************ 00:14:11.900 START TEST nvmf_delete_subsystem 00:14:11.900 ************************************ 00:14:11.900 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:11.900 * Looking for test storage... 00:14:11.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:11.900 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:11.900 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:11.900 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.900 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.900 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.900 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.900 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.900 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.900 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.900 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.900 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.900 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.900 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:11.900 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:11.900 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:12.159 04:30:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:14.060 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:14.060 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:14.060 04:30:34 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:14.060 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:14.060 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:14.060 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:14.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:14.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:14:14.061 00:14:14.061 --- 10.0.0.2 ping statistics --- 00:14:14.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.061 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:14:14.061 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:14.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:14.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:14:14.061 00:14:14.061 --- 10.0.0.1 ping statistics --- 00:14:14.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.061 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:14:14.061 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.061 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:14.061 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:14.061 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.061 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:14.061 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:14.061 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.061 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:14.061 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:14.319 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:14.319 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:14.319 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:14.319 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:14.319 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2745148 00:14:14.319 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:14.319 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2745148 00:14:14.319 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 2745148 ']' 00:14:14.319 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.319 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:14.319 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.319 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:14.319 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:14.319 [2024-07-14 04:30:34.318517] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:14.319 [2024-07-14 04:30:34.318592] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.319 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.319 [2024-07-14 04:30:34.382525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:14.319 [2024-07-14 04:30:34.465942] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:14.319 [2024-07-14 04:30:34.465994] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.319 [2024-07-14 04:30:34.466023] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.319 [2024-07-14 04:30:34.466034] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.319 [2024-07-14 04:30:34.466044] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.319 [2024-07-14 04:30:34.466115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.319 [2024-07-14 04:30:34.466121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:14.577 [2024-07-14 04:30:34.598402] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:14.577 [2024-07-14 04:30:34.614740] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:14.577 NULL1 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:14.577 Delay0 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2745269 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:14.577 04:30:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:14.577 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.577 [2024-07-14 04:30:34.689392] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
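At this point the target side of the delete_subsystem test is fully assembled: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, a null bdev wrapped in a delay bdev and attached as namespace 1, and a backgrounded spdk_nvme_perf workload driving it. The sketch below restates that sequence as standalone rpc.py calls so the flow is easier to follow than in the interleaved xtrace; it is only an illustration of what the trace shows, not the script itself, and it assumes a running nvmf_tgt reachable through rpc.py's default /var/tmp/spdk.sock socket with the same addresses as this run.

# Target-side setup reconstructed from the trace above (illustrative only)
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512
# The delay bdev keeps I/O queued long enough for the later nvmf_delete_subsystem to race with it
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Initiator-side load, run outside the target's network namespace as in the log, and backgrounded
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &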
00:14:16.474 04:30:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:16.474 04:30:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.474 04:30:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 starting I/O failed: -6 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 starting I/O failed: -6 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 starting I/O failed: -6 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 starting I/O failed: -6 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 starting I/O failed: -6 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 starting I/O failed: -6 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 starting I/O failed: -6 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 starting I/O failed: -6 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 starting I/O failed: -6 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 starting I/O failed: -6 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 starting I/O failed: -6 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 [2024-07-14 04:30:36.861749] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625180 is same with the state(5) to be set 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 starting I/O failed: -6 00:14:16.732 Read completed with 
error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 starting I/O failed: -6 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 starting I/O failed: -6 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 starting I/O failed: -6 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 starting I/O failed: -6 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 starting I/O failed: -6 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error 
(sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 starting I/O failed: -6 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Write completed with error (sct=0, sc=8) 00:14:16.732 starting I/O failed: -6 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.732 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 starting I/O failed: -6 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 starting I/O failed: -6 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 starting I/O failed: -6 00:14:16.733 [2024-07-14 04:30:36.862433] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f13cc00c2f0 is same with the state(5) to be set 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Read completed with 
error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Write completed with error (sct=0, sc=8) 00:14:16.733 Read completed with error (sct=0, sc=8) 00:14:17.666 [2024-07-14 04:30:37.828882] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16288b0 is same with the state(5) to be set 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 [2024-07-14 04:30:37.863547] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625aa0 is same with the state(5) to be set 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, 
sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 [2024-07-14 04:30:37.864004] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625360 is same with the state(5) to be set 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 [2024-07-14 04:30:37.864410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f13cc00bfe0 is same with the state(5) to be set 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 Read completed with error (sct=0, sc=8) 00:14:17.924 04:30:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.924 Write completed with error (sct=0, sc=8) 00:14:17.924 [2024-07-14 04:30:37.865077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f13cc00c600 is same with the state(5) to be set 00:14:17.924 04:30:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:17.924 04:30:37 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@35 -- # kill -0 2745269 00:14:17.924 Initializing NVMe Controllers 00:14:17.924 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:17.924 Controller IO queue size 128, less than required. 00:14:17.924 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:17.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:17.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:17.924 Initialization complete. Launching workers. 00:14:17.924 ======================================================== 00:14:17.924 Latency(us) 00:14:17.924 Device Information : IOPS MiB/s Average min max 00:14:17.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.72 0.08 901665.31 600.90 1012945.63 00:14:17.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.73 0.08 906732.61 352.56 1013253.26 00:14:17.925 ======================================================== 00:14:17.925 Total : 331.45 0.16 904183.79 352.56 1013253.26 00:14:17.925 00:14:17.925 04:30:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:17.925 [2024-07-14 04:30:37.865624] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16288b0 (9): Bad file descriptor 00:14:17.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:18.182 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:18.182 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2745269 00:14:18.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2745269) - No such process 00:14:18.182 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2745269 00:14:18.182 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:18.182 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2745269 00:14:18.182 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:18.182 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:18.182 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:18.182 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:18.182 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2745269 00:14:18.182 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:18.182 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:18.182 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:18.182 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:18.182 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:18.182 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.182 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:14:18.439 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.440 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:18.440 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.440 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:18.440 [2024-07-14 04:30:38.386402] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.440 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.440 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.440 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.440 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:18.440 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.440 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2745695 00:14:18.440 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:18.440 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2745695 00:14:18.440 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:18.440 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:18.440 EAL: No free 2048 kB hugepages reported on node 1 00:14:18.440 [2024-07-14 04:30:38.442228] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
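After the earlier mid-I/O deletion, the subsystem is re-created and a shorter 3-second spdk_nvme_perf run (perf_pid=2745695) is started against it; the script then polls that pid with a bounded delay counter until the workload exits, which appears to confirm the target still serves I/O cleanly once a subsystem has been deleted out from under a live workload. The trace only exposes the individual commands of that poll (delay=0, kill -0, sleep 0.5, (( delay++ > 20 ))), so the loop below is a plausible reconstruction of the watchdog rather than the literal source of delete_subsystem.sh; in particular, what happens when the counter trips is an assumption.

# Poll loop implied by the xtrace of delete_subsystem.sh lines 56-60 (reconstruction, not the original source)
delay=0
while kill -0 "$perf_pid"; do      # keep looping while the perf process is still alive
    sleep 0.5
    (( delay++ > 20 )) && break    # assumed bail-out after ~10 s; the trace shows only the counter test
done
wait "$perf_pid"                   # line 67 in the trace: collect perf's exit status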
00:14:19.005 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:19.005 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2745695 00:14:19.005 04:30:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:19.262 04:30:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:19.262 04:30:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2745695 00:14:19.262 04:30:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:19.827 04:30:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:19.827 04:30:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2745695 00:14:19.827 04:30:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:20.391 04:30:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:20.391 04:30:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2745695 00:14:20.391 04:30:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:20.955 04:30:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:20.956 04:30:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2745695 00:14:20.956 04:30:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:21.520 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:21.520 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2745695 00:14:21.520 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:21.520 Initializing NVMe Controllers 00:14:21.520 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:21.520 Controller IO queue size 128, less than required. 00:14:21.520 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:21.520 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:21.520 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:21.520 Initialization complete. Launching workers. 
00:14:21.520 ======================================================== 00:14:21.520 Latency(us) 00:14:21.520 Device Information : IOPS MiB/s Average min max 00:14:21.520 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003694.32 1000241.32 1011370.78 00:14:21.520 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006587.21 1000253.27 1045369.11 00:14:21.520 ======================================================== 00:14:21.520 Total : 256.00 0.12 1005140.77 1000241.32 1045369.11 00:14:21.520 00:14:21.778 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:21.778 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2745695 00:14:21.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2745695) - No such process 00:14:21.778 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2745695 00:14:21.778 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:21.778 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:21.778 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:21.778 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:21.778 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:21.778 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:21.778 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:21.778 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:21.778 rmmod nvme_tcp 00:14:21.778 rmmod nvme_fabrics 00:14:21.778 rmmod nvme_keyring 00:14:22.036 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:22.036 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:22.036 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:22.036 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2745148 ']' 00:14:22.036 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2745148 00:14:22.036 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 2745148 ']' 00:14:22.036 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 2745148 00:14:22.036 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:14:22.036 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:22.036 04:30:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2745148 00:14:22.036 04:30:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:22.036 04:30:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:22.036 04:30:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2745148' 00:14:22.036 killing process with pid 2745148 00:14:22.036 04:30:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 2745148 00:14:22.036 04:30:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 
2745148 00:14:22.295 04:30:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:22.295 04:30:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:22.295 04:30:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:22.295 04:30:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:22.295 04:30:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:22.295 04:30:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.295 04:30:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.295 04:30:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.199 04:30:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:24.199 00:14:24.199 real 0m12.266s 00:14:24.199 user 0m27.728s 00:14:24.199 sys 0m2.955s 00:14:24.199 04:30:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:24.199 04:30:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:24.199 ************************************ 00:14:24.199 END TEST nvmf_delete_subsystem 00:14:24.199 ************************************ 00:14:24.199 04:30:44 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:24.199 04:30:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:24.199 04:30:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:24.199 04:30:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:24.199 ************************************ 00:14:24.199 START TEST nvmf_ns_masking 00:14:24.199 ************************************ 00:14:24.199 04:30:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:24.199 * Looking for test storage... 
00:14:24.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.457 04:30:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=7b213411-7661-4b9d-a7bc-bae3bd132a69 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:24.458 04:30:44 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:24.458 04:30:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:26.416 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:26.416 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:26.416 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:26.416 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:26.416 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:26.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:14:26.417 00:14:26.417 --- 10.0.0.2 ping statistics --- 00:14:26.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.417 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:26.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:14:26.417 00:14:26.417 --- 10.0.0.1 ping statistics --- 00:14:26.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.417 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2748040 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2748040 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 2748040 ']' 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:26.417 04:30:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:26.417 [2024-07-14 04:30:46.550353] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
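The nvmf_tcp_init trace above amounts to a small two-ended test bed: one E810 port (cvl_0_0) is moved into a fresh network namespace to serve as the target side, the other (cvl_0_1) stays in the root namespace as the initiator, the two get 10.0.0.2 and 10.0.0.1, and nvmf_tgt is then launched inside the namespace. Condensed from the commands traced above, as a sketch rather than an authoritative recipe (the cvl_0_* names are this host's Intel E810 netdevs, so substitute your own):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                   # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator sanity check
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &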
00:14:26.417 [2024-07-14 04:30:46.550446] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.417 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.675 [2024-07-14 04:30:46.618311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:26.675 [2024-07-14 04:30:46.708703] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.675 [2024-07-14 04:30:46.708764] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.675 [2024-07-14 04:30:46.708792] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.675 [2024-07-14 04:30:46.708803] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.675 [2024-07-14 04:30:46.708813] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.675 [2024-07-14 04:30:46.708969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.675 [2024-07-14 04:30:46.708998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.675 [2024-07-14 04:30:46.709044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.675 [2024-07-14 04:30:46.709047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.675 04:30:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:26.675 04:30:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:14:26.675 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:26.675 04:30:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:26.675 04:30:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:26.675 04:30:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.675 04:30:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:27.241 [2024-07-14 04:30:47.130640] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.241 04:30:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:27.241 04:30:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:27.241 04:30:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:27.500 Malloc1 00:14:27.500 04:30:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:27.758 Malloc2 00:14:27.758 04:30:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:27.758 04:30:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:28.016 04:30:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:28.275 [2024-07-14 04:30:48.420241] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.275 04:30:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:28.275 04:30:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7b213411-7661-4b9d-a7bc-bae3bd132a69 -a 10.0.0.2 -s 4420 -i 4 00:14:28.536 04:30:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:28.536 04:30:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:28.536 04:30:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:28.536 04:30:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:28.536 04:30:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:31.071 04:30:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:31.071 04:30:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:31.071 04:30:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:31.071 04:30:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:31.071 04:30:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:31.071 04:30:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:31.071 04:30:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:31.071 04:30:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:31.071 04:30:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:31.071 04:30:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:31.071 04:30:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:31.071 04:30:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:31.071 04:30:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:31.071 [ 0]:0x1 00:14:31.071 04:30:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:31.071 04:30:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:31.071 04:30:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=bad9c90ba19d41558f2bba8d5f596f6f 00:14:31.071 04:30:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ bad9c90ba19d41558f2bba8d5f596f6f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.071 04:30:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:31.071 04:30:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:31.071 04:30:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:31.071 04:30:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
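At this point the target side is fully provisioned and the first host has connected, so the remainder of the trace is pure masking logic. The provisioning just performed, together with the ns_is_visible probe the script keeps reusing, condenses to roughly the following sketch assembled from the rpc.py and nvme-cli calls traced above (rpc.py lives under scripts/ in the SPDK tree, and the host UUID is the one this run generated with uuidgen):

  # target side, against the nvmf_tgt started earlier
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I 7b213411-7661-4b9d-a7bc-bae3bd132a69 -a 10.0.0.2 -s 4420 -i 4

  # ns_is_visible: in this trace a namespace the host may not see reports an all-zero NGUID
  nvme list-ns /dev/nvme0 | grep 0x1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

  # masking primitives the rest of the trace exercises
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  scripts/rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1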
00:14:31.071 [ 0]:0x1 00:14:31.071 04:30:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:31.071 04:30:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:31.071 04:30:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=bad9c90ba19d41558f2bba8d5f596f6f 00:14:31.071 04:30:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ bad9c90ba19d41558f2bba8d5f596f6f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.071 04:30:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:31.071 04:30:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:31.071 04:30:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:31.071 [ 1]:0x2 00:14:31.071 04:30:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:31.071 04:30:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:31.071 04:30:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=56a7675b4c3b46618af34932cde123a3 00:14:31.071 04:30:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 56a7675b4c3b46618af34932cde123a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.071 04:30:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:14:31.071 04:30:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:31.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.330 04:30:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.330 04:30:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:31.587 04:30:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:14:31.587 04:30:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7b213411-7661-4b9d-a7bc-bae3bd132a69 -a 10.0.0.2 -s 4420 -i 4 00:14:31.844 04:30:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:31.844 04:30:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:31.844 04:30:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:31.844 04:30:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:14:31.844 04:30:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:14:31.844 04:30:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:33.747 04:30:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:33.747 04:30:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:33.748 04:30:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:33.748 04:30:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:33.748 04:30:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == 
nvme_device_counter )) 00:14:33.748 04:30:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:33.748 04:30:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:33.748 04:30:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:34.006 [ 0]:0x2 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:34.006 04:30:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:34.006 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=56a7675b4c3b46618af34932cde123a3 00:14:34.006 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 56a7675b4c3b46618af34932cde123a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:34.006 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:14:34.264 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:14:34.264 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:34.265 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:34.265 [ 0]:0x1 00:14:34.265 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:34.265 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:34.265 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=bad9c90ba19d41558f2bba8d5f596f6f 00:14:34.265 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ bad9c90ba19d41558f2bba8d5f596f6f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:34.265 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:14:34.265 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:34.265 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:34.265 [ 1]:0x2 00:14:34.265 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:34.265 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:34.265 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=56a7675b4c3b46618af34932cde123a3 00:14:34.265 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 56a7675b4c3b46618af34932cde123a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:34.265 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:34.523 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:14:34.523 04:30:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:34.523 04:30:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:34.523 04:30:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:34.523 04:30:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:34.523 04:30:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:34.523 04:30:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:34.523 04:30:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:34.523 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:34.523 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:34.523 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:34.523 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:34.781 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:34.781 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:34.781 04:30:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:34.781 
04:30:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:34.781 04:30:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:34.781 04:30:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:34.781 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:14:34.781 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:34.781 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:34.781 [ 0]:0x2 00:14:34.781 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:34.781 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:34.781 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=56a7675b4c3b46618af34932cde123a3 00:14:34.781 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 56a7675b4c3b46618af34932cde123a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:34.781 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:14:34.781 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:34.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.781 04:30:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:35.041 04:30:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:14:35.041 04:30:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7b213411-7661-4b9d-a7bc-bae3bd132a69 -a 10.0.0.2 -s 4420 -i 4 00:14:35.301 04:30:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:35.301 04:30:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:35.301 04:30:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:35.301 04:30:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:35.301 04:30:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:35.301 04:30:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:37.204 [ 0]:0x1 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=bad9c90ba19d41558f2bba8d5f596f6f 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ bad9c90ba19d41558f2bba8d5f596f6f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:37.204 [ 1]:0x2 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:37.204 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:37.462 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=56a7675b4c3b46618af34932cde123a3 00:14:37.462 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 56a7675b4c3b46618af34932cde123a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.462 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:37.720 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:37.721 [ 0]:0x2 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=56a7675b4c3b46618af34932cde123a3 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 56a7675b4c3b46618af34932cde123a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:37.721 04:30:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:37.992 [2024-07-14 04:30:57.991270] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:37.992 request: 00:14:37.992 { 00:14:37.992 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.992 "nsid": 2, 00:14:37.992 "host": "nqn.2016-06.io.spdk:host1", 00:14:37.992 "method": 
"nvmf_ns_remove_host", 00:14:37.992 "req_id": 1 00:14:37.992 } 00:14:37.992 Got JSON-RPC error response 00:14:37.992 response: 00:14:37.992 { 00:14:37.992 "code": -32602, 00:14:37.992 "message": "Invalid parameters" 00:14:37.992 } 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:37.992 [ 0]:0x2 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=56a7675b4c3b46618af34932cde123a3 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 56a7675b4c3b46618af34932cde123a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:37.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.992 04:30:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.250 04:30:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:38.250 04:30:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:14:38.250 04:30:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:38.250 04:30:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:38.250 04:30:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:38.250 04:30:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:38.250 04:30:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:38.250 04:30:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:38.250 rmmod nvme_tcp 00:14:38.250 rmmod nvme_fabrics 00:14:38.251 rmmod nvme_keyring 00:14:38.251 04:30:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:38.251 04:30:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:38.251 04:30:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:38.251 04:30:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2748040 ']' 00:14:38.251 04:30:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2748040 00:14:38.251 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 2748040 ']' 00:14:38.251 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 2748040 00:14:38.251 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:14:38.509 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:38.509 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2748040 00:14:38.509 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:38.509 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:38.509 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2748040' 00:14:38.509 killing process with pid 2748040 00:14:38.509 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 2748040 00:14:38.509 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 2748040 00:14:38.768 04:30:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:38.768 04:30:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:38.768 04:30:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:38.768 04:30:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:38.768 04:30:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:38.768 04:30:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.768 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.768 04:30:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.670 
04:31:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:40.670 00:14:40.670 real 0m16.459s 00:14:40.670 user 0m51.448s 00:14:40.670 sys 0m3.750s 00:14:40.670 04:31:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:40.670 04:31:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:40.670 ************************************ 00:14:40.670 END TEST nvmf_ns_masking 00:14:40.670 ************************************ 00:14:40.670 04:31:00 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:40.670 04:31:00 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:40.670 04:31:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:40.670 04:31:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:40.670 04:31:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:40.670 ************************************ 00:14:40.670 START TEST nvmf_nvme_cli 00:14:40.670 ************************************ 00:14:40.670 04:31:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:40.928 * Looking for test storage... 00:14:40.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:40.928 04:31:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:40.929 04:31:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:40.929 04:31:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:40.929 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:40.929 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:40.929 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:40.929 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:40.929 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:40.929 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.929 04:31:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:40.929 04:31:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.929 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:40.929 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:40.929 04:31:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:40.929 04:31:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:42.834 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:42.834 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:42.834 04:31:02 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:42.834 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:42.834 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:42.834 04:31:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:42.834 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:42.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:42.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:14:42.834 00:14:42.834 --- 10.0.0.2 ping statistics --- 00:14:42.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.834 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:14:42.834 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:42.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:42.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:14:42.834 00:14:42.834 --- 10.0.0.1 ping statistics --- 00:14:42.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.834 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:14:42.834 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.834 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:42.834 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:42.834 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.834 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:42.834 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:42.834 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.835 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:42.835 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:43.094 04:31:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:43.094 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:43.094 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:43.094 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.094 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2751577 00:14:43.094 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:43.094 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2751577 00:14:43.094 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 2751577 ']' 00:14:43.094 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.094 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:43.094 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:43.094 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:43.094 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.094 [2024-07-14 04:31:03.091350] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:43.094 [2024-07-14 04:31:03.091430] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.094 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.094 [2024-07-14 04:31:03.157699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:43.094 [2024-07-14 04:31:03.247117] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.094 [2024-07-14 04:31:03.247190] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.094 [2024-07-14 04:31:03.247205] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.094 [2024-07-14 04:31:03.247216] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.094 [2024-07-14 04:31:03.247225] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.094 [2024-07-14 04:31:03.247291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.094 [2024-07-14 04:31:03.247351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.094 [2024-07-14 04:31:03.247418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:43.094 [2024-07-14 04:31:03.247420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.351 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.352 [2024-07-14 04:31:03.404725] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.352 Malloc0 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.352 Malloc1 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.352 [2024-07-14 04:31:03.485369] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.352 04:31:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:14:43.609 00:14:43.609 Discovery Log Number of Records 2, Generation counter 2 00:14:43.609 =====Discovery Log Entry 0====== 00:14:43.609 trtype: tcp 00:14:43.609 adrfam: ipv4 00:14:43.609 subtype: current discovery subsystem 00:14:43.609 treq: not required 00:14:43.609 portid: 0 00:14:43.609 trsvcid: 4420 00:14:43.609 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:43.609 traddr: 10.0.0.2 00:14:43.609 eflags: explicit discovery connections, duplicate discovery information 00:14:43.609 sectype: none 00:14:43.609 =====Discovery Log Entry 1====== 00:14:43.609 trtype: tcp 00:14:43.609 adrfam: ipv4 00:14:43.609 subtype: nvme subsystem 00:14:43.609 treq: not required 00:14:43.609 portid: 0 00:14:43.609 trsvcid: 
4420 00:14:43.609 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:43.609 traddr: 10.0.0.2 00:14:43.609 eflags: none 00:14:43.609 sectype: none 00:14:43.609 04:31:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:43.609 04:31:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:43.609 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:43.609 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.609 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:43.609 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:43.609 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.609 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:43.609 04:31:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.609 04:31:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:43.609 04:31:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:44.175 04:31:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:44.175 04:31:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:14:44.175 04:31:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:44.175 04:31:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:44.175 04:31:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:44.175 04:31:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:46.711 04:31:06 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:46.711 /dev/nvme0n1 ]] 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:46.711 04:31:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:46.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.971 04:31:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:46.971 04:31:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:14:46.971 04:31:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:46.971 04:31:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:46.971 04:31:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:46.971 04:31:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:46.971 04:31:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:14:46.971 04:31:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:46.971 04:31:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:46.971 04:31:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.971 04:31:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.971 04:31:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.971 04:31:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:46.971 04:31:06 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:46.971 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:46.971 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:46.972 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:46.972 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:46.972 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:46.972 04:31:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:46.972 rmmod nvme_tcp 00:14:46.972 rmmod nvme_fabrics 00:14:46.972 rmmod nvme_keyring 00:14:46.972 04:31:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:46.972 04:31:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:46.972 04:31:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:46.972 04:31:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2751577 ']' 00:14:46.972 04:31:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2751577 00:14:46.972 04:31:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 2751577 ']' 00:14:46.972 04:31:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 2751577 00:14:46.972 04:31:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:14:46.972 04:31:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:46.972 04:31:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2751577 00:14:46.972 04:31:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:46.972 04:31:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:46.972 04:31:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2751577' 00:14:46.972 killing process with pid 2751577 00:14:46.972 04:31:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 2751577 00:14:46.972 04:31:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 2751577 00:14:47.230 04:31:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:47.230 04:31:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:47.230 04:31:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:47.230 04:31:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:47.230 04:31:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:47.230 04:31:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.230 04:31:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.230 04:31:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.770 04:31:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:49.770 00:14:49.770 real 0m8.533s 00:14:49.770 user 0m16.648s 00:14:49.770 sys 0m2.181s 00:14:49.770 04:31:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:49.770 04:31:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.770 ************************************ 00:14:49.770 END TEST nvmf_nvme_cli 00:14:49.770 ************************************ 00:14:49.770 04:31:09 nvmf_tcp -- 
nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:14:49.770 04:31:09 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:49.770 04:31:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:49.770 04:31:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:49.770 04:31:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:49.770 ************************************ 00:14:49.770 START TEST nvmf_vfio_user 00:14:49.770 ************************************ 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:49.770 * Looking for test storage... 00:14:49.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:49.770 
04:31:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:49.770 04:31:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:49.771 04:31:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:49.771 04:31:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:49.771 04:31:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:49.771 04:31:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:49.771 04:31:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:49.771 04:31:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2752386 00:14:49.771 04:31:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:49.771 04:31:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2752386' 00:14:49.771 Process pid: 2752386 00:14:49.771 04:31:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:49.771 04:31:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2752386 00:14:49.771 04:31:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 2752386 ']' 00:14:49.771 04:31:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.771 04:31:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:49.771 04:31:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.771 04:31:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:49.771 04:31:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:49.771 [2024-07-14 04:31:09.532445] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:49.771 [2024-07-14 04:31:09.532529] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.771 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.771 [2024-07-14 04:31:09.591514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:49.771 [2024-07-14 04:31:09.678442] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.771 [2024-07-14 04:31:09.678502] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.771 [2024-07-14 04:31:09.678516] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.771 [2024-07-14 04:31:09.678543] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.771 [2024-07-14 04:31:09.678554] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:49.771 [2024-07-14 04:31:09.678723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.771 [2024-07-14 04:31:09.678776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.771 [2024-07-14 04:31:09.678838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:49.771 [2024-07-14 04:31:09.678844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.771 04:31:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:49.771 04:31:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:14:49.771 04:31:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:50.707 04:31:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:50.965 04:31:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:50.965 04:31:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:50.965 04:31:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:50.965 04:31:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:50.965 04:31:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:51.224 Malloc1 00:14:51.481 04:31:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:51.740 04:31:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:51.998 04:31:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:52.256 04:31:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:52.256 04:31:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:52.256 04:31:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:52.514 Malloc2 00:14:52.514 04:31:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:52.772 04:31:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:53.030 04:31:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:53.289 04:31:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:53.289 04:31:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:53.289 04:31:13 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:53.289 04:31:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:53.289 04:31:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:53.289 04:31:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:53.289 [2024-07-14 04:31:13.273102] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:53.289 [2024-07-14 04:31:13.273147] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2752926 ] 00:14:53.289 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.289 [2024-07-14 04:31:13.307132] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:53.289 [2024-07-14 04:31:13.309655] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:53.289 [2024-07-14 04:31:13.309686] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa285bc8000 00:14:53.289 [2024-07-14 04:31:13.313876] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:53.289 [2024-07-14 04:31:13.314662] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:53.289 [2024-07-14 04:31:13.315669] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:53.289 [2024-07-14 04:31:13.316672] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:53.289 [2024-07-14 04:31:13.317674] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:53.289 [2024-07-14 04:31:13.318683] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:53.289 [2024-07-14 04:31:13.319689] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:53.289 [2024-07-14 04:31:13.320694] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:53.289 [2024-07-14 04:31:13.321703] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:53.289 [2024-07-14 04:31:13.321723] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa28497e000 00:14:53.289 [2024-07-14 04:31:13.322841] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:53.289 [2024-07-14 04:31:13.337468] vfio_user_pci.c: 386:spdk_vfio_user_setup: 
*DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:53.289 [2024-07-14 04:31:13.337506] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:53.289 [2024-07-14 04:31:13.339814] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:53.289 [2024-07-14 04:31:13.339891] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:53.289 [2024-07-14 04:31:13.339987] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:53.289 [2024-07-14 04:31:13.340022] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:53.289 [2024-07-14 04:31:13.340033] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:53.289 [2024-07-14 04:31:13.344877] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:53.289 [2024-07-14 04:31:13.344906] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:53.289 [2024-07-14 04:31:13.344920] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:53.289 [2024-07-14 04:31:13.345829] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:53.289 [2024-07-14 04:31:13.345863] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:53.289 [2024-07-14 04:31:13.345889] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:53.289 [2024-07-14 04:31:13.346832] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:53.289 [2024-07-14 04:31:13.346869] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:53.289 [2024-07-14 04:31:13.347836] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:53.289 [2024-07-14 04:31:13.347874] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:53.289 [2024-07-14 04:31:13.347886] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:53.289 [2024-07-14 04:31:13.347897] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:53.289 [2024-07-14 04:31:13.348008] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:53.289 [2024-07-14 04:31:13.348016] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:53.289 [2024-07-14 04:31:13.348025] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:53.289 [2024-07-14 04:31:13.348863] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:53.289 [2024-07-14 04:31:13.349864] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:53.289 [2024-07-14 04:31:13.350877] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:53.289 [2024-07-14 04:31:13.351869] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:53.289 [2024-07-14 04:31:13.351967] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:53.289 [2024-07-14 04:31:13.352891] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:53.289 [2024-07-14 04:31:13.352909] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:53.290 [2024-07-14 04:31:13.352918] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:53.290 [2024-07-14 04:31:13.352942] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:53.290 [2024-07-14 04:31:13.352961] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:53.290 [2024-07-14 04:31:13.352995] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:53.290 [2024-07-14 04:31:13.353005] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:53.290 [2024-07-14 04:31:13.353029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:53.290 [2024-07-14 04:31:13.353099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:53.290 [2024-07-14 04:31:13.353121] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:53.290 [2024-07-14 04:31:13.353138] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:53.290 [2024-07-14 04:31:13.353161] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:53.290 [2024-07-14 04:31:13.353169] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:53.290 [2024-07-14 04:31:13.353177] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:53.290 [2024-07-14 04:31:13.353185] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:53.290 [2024-07-14 04:31:13.353192] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:53.290 [2024-07-14 04:31:13.353206] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:53.290 [2024-07-14 04:31:13.353222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:53.290 [2024-07-14 04:31:13.353243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:53.290 [2024-07-14 04:31:13.353262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.290 [2024-07-14 04:31:13.353275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.290 [2024-07-14 04:31:13.353287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.290 [2024-07-14 04:31:13.353299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.290 [2024-07-14 04:31:13.353307] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:53.290 [2024-07-14 04:31:13.353322] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:53.290 [2024-07-14 04:31:13.353336] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:53.290 [2024-07-14 04:31:13.353348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:53.290 [2024-07-14 04:31:13.353359] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:53.290 [2024-07-14 04:31:13.353368] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:53.290 [2024-07-14 04:31:13.353379] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:53.290 [2024-07-14 04:31:13.353393] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:53.290 [2024-07-14 04:31:13.353407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:53.290 [2024-07-14 04:31:13.353421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:53.290 [2024-07-14 04:31:13.353485] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:53.290 [2024-07-14 04:31:13.353501] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:53.290 [2024-07-14 04:31:13.353519] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:53.290 [2024-07-14 04:31:13.353528] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:53.290 [2024-07-14 04:31:13.353538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:53.290 [2024-07-14 04:31:13.353554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:53.290 [2024-07-14 04:31:13.353571] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:53.290 [2024-07-14 04:31:13.353588] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:53.290 [2024-07-14 04:31:13.353603] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:53.290 [2024-07-14 04:31:13.353614] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:53.290 [2024-07-14 04:31:13.353622] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:53.290 [2024-07-14 04:31:13.353631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:53.290 [2024-07-14 04:31:13.353655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:53.290 [2024-07-14 04:31:13.353679] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:53.290 [2024-07-14 04:31:13.353693] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:53.290 [2024-07-14 04:31:13.353705] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:53.290 [2024-07-14 04:31:13.353712] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:53.290 [2024-07-14 04:31:13.353721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:53.290 [2024-07-14 04:31:13.353735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:53.290 [2024-07-14 04:31:13.353749] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:53.290 [2024-07-14 04:31:13.353761] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:14:53.290 [2024-07-14 04:31:13.353775] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:53.290 [2024-07-14 04:31:13.353787] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:53.290 [2024-07-14 04:31:13.353795] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:53.290 [2024-07-14 04:31:13.353804] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:53.290 [2024-07-14 04:31:13.353812] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:53.290 [2024-07-14 04:31:13.353820] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:53.290 [2024-07-14 04:31:13.353880] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:53.290 [2024-07-14 04:31:13.353903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:53.290 [2024-07-14 04:31:13.353922] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:53.290 [2024-07-14 04:31:13.353934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:53.290 [2024-07-14 04:31:13.353950] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:53.290 [2024-07-14 04:31:13.353962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:53.290 [2024-07-14 04:31:13.353978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:53.290 [2024-07-14 04:31:13.353990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:53.290 [2024-07-14 04:31:13.354008] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:53.290 [2024-07-14 04:31:13.354017] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:53.290 [2024-07-14 04:31:13.354024] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:53.290 [2024-07-14 04:31:13.354030] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:53.290 [2024-07-14 04:31:13.354040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:53.290 [2024-07-14 04:31:13.354052] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:53.290 [2024-07-14 04:31:13.354059] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:53.290 [2024-07-14 04:31:13.354068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:53.290 [2024-07-14 04:31:13.354079] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:53.290 [2024-07-14 04:31:13.354087] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:53.290 [2024-07-14 04:31:13.354096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:53.290 [2024-07-14 04:31:13.354108] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:53.290 [2024-07-14 04:31:13.354116] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:53.290 [2024-07-14 04:31:13.354124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:53.290 [2024-07-14 04:31:13.354136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:53.290 [2024-07-14 04:31:13.354156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:53.290 [2024-07-14 04:31:13.354172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:53.290 [2024-07-14 04:31:13.354201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:53.290 ===================================================== 00:14:53.290 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:53.290 ===================================================== 00:14:53.290 Controller Capabilities/Features 00:14:53.290 ================================ 00:14:53.290 Vendor ID: 4e58 00:14:53.290 Subsystem Vendor ID: 4e58 00:14:53.290 Serial Number: SPDK1 00:14:53.290 Model Number: SPDK bdev Controller 00:14:53.290 Firmware Version: 24.05.1 00:14:53.290 Recommended Arb Burst: 6 00:14:53.290 IEEE OUI Identifier: 8d 6b 50 00:14:53.290 Multi-path I/O 00:14:53.290 May have multiple subsystem ports: Yes 00:14:53.290 May have multiple controllers: Yes 00:14:53.290 Associated with SR-IOV VF: No 00:14:53.290 Max Data Transfer Size: 131072 00:14:53.290 Max Number of Namespaces: 32 00:14:53.290 Max Number of I/O Queues: 127 00:14:53.290 NVMe Specification Version (VS): 1.3 00:14:53.290 NVMe Specification Version (Identify): 1.3 00:14:53.290 Maximum Queue Entries: 256 00:14:53.290 Contiguous Queues Required: Yes 00:14:53.290 Arbitration Mechanisms Supported 00:14:53.290 Weighted Round Robin: Not Supported 00:14:53.290 Vendor Specific: Not Supported 00:14:53.290 Reset Timeout: 15000 ms 00:14:53.290 Doorbell Stride: 4 bytes 00:14:53.290 NVM Subsystem Reset: Not Supported 00:14:53.290 Command Sets Supported 00:14:53.290 NVM Command Set: Supported 00:14:53.290 Boot Partition: Not Supported 00:14:53.290 Memory Page Size Minimum: 4096 bytes 00:14:53.290 Memory Page Size Maximum: 4096 bytes 00:14:53.290 Persistent Memory Region: Not Supported 00:14:53.290 Optional Asynchronous Events Supported 00:14:53.290 Namespace Attribute Notices: Supported 00:14:53.290 Firmware Activation Notices: Not Supported 00:14:53.290 ANA Change Notices: Not Supported 
00:14:53.290 PLE Aggregate Log Change Notices: Not Supported 00:14:53.290 LBA Status Info Alert Notices: Not Supported 00:14:53.290 EGE Aggregate Log Change Notices: Not Supported 00:14:53.290 Normal NVM Subsystem Shutdown event: Not Supported 00:14:53.290 Zone Descriptor Change Notices: Not Supported 00:14:53.290 Discovery Log Change Notices: Not Supported 00:14:53.290 Controller Attributes 00:14:53.290 128-bit Host Identifier: Supported 00:14:53.290 Non-Operational Permissive Mode: Not Supported 00:14:53.290 NVM Sets: Not Supported 00:14:53.290 Read Recovery Levels: Not Supported 00:14:53.290 Endurance Groups: Not Supported 00:14:53.290 Predictable Latency Mode: Not Supported 00:14:53.290 Traffic Based Keep ALive: Not Supported 00:14:53.290 Namespace Granularity: Not Supported 00:14:53.290 SQ Associations: Not Supported 00:14:53.290 UUID List: Not Supported 00:14:53.290 Multi-Domain Subsystem: Not Supported 00:14:53.290 Fixed Capacity Management: Not Supported 00:14:53.290 Variable Capacity Management: Not Supported 00:14:53.290 Delete Endurance Group: Not Supported 00:14:53.290 Delete NVM Set: Not Supported 00:14:53.290 Extended LBA Formats Supported: Not Supported 00:14:53.290 Flexible Data Placement Supported: Not Supported 00:14:53.290 00:14:53.290 Controller Memory Buffer Support 00:14:53.290 ================================ 00:14:53.290 Supported: No 00:14:53.290 00:14:53.290 Persistent Memory Region Support 00:14:53.290 ================================ 00:14:53.290 Supported: No 00:14:53.290 00:14:53.290 Admin Command Set Attributes 00:14:53.290 ============================ 00:14:53.290 Security Send/Receive: Not Supported 00:14:53.290 Format NVM: Not Supported 00:14:53.290 Firmware Activate/Download: Not Supported 00:14:53.290 Namespace Management: Not Supported 00:14:53.290 Device Self-Test: Not Supported 00:14:53.290 Directives: Not Supported 00:14:53.290 NVMe-MI: Not Supported 00:14:53.290 Virtualization Management: Not Supported 00:14:53.290 Doorbell Buffer Config: Not Supported 00:14:53.290 Get LBA Status Capability: Not Supported 00:14:53.290 Command & Feature Lockdown Capability: Not Supported 00:14:53.290 Abort Command Limit: 4 00:14:53.290 Async Event Request Limit: 4 00:14:53.290 Number of Firmware Slots: N/A 00:14:53.290 Firmware Slot 1 Read-Only: N/A 00:14:53.290 Firmware Activation Without Reset: N/A 00:14:53.290 Multiple Update Detection Support: N/A 00:14:53.290 Firmware Update Granularity: No Information Provided 00:14:53.291 Per-Namespace SMART Log: No 00:14:53.291 Asymmetric Namespace Access Log Page: Not Supported 00:14:53.291 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:53.291 Command Effects Log Page: Supported 00:14:53.291 Get Log Page Extended Data: Supported 00:14:53.291 Telemetry Log Pages: Not Supported 00:14:53.291 Persistent Event Log Pages: Not Supported 00:14:53.291 Supported Log Pages Log Page: May Support 00:14:53.291 Commands Supported & Effects Log Page: Not Supported 00:14:53.291 Feature Identifiers & Effects Log Page:May Support 00:14:53.291 NVMe-MI Commands & Effects Log Page: May Support 00:14:53.291 Data Area 4 for Telemetry Log: Not Supported 00:14:53.291 Error Log Page Entries Supported: 128 00:14:53.291 Keep Alive: Supported 00:14:53.291 Keep Alive Granularity: 10000 ms 00:14:53.291 00:14:53.291 NVM Command Set Attributes 00:14:53.291 ========================== 00:14:53.291 Submission Queue Entry Size 00:14:53.291 Max: 64 00:14:53.291 Min: 64 00:14:53.291 Completion Queue Entry Size 00:14:53.291 Max: 16 00:14:53.291 Min: 16 
00:14:53.291 Number of Namespaces: 32 00:14:53.291 Compare Command: Supported 00:14:53.291 Write Uncorrectable Command: Not Supported 00:14:53.291 Dataset Management Command: Supported 00:14:53.291 Write Zeroes Command: Supported 00:14:53.291 Set Features Save Field: Not Supported 00:14:53.291 Reservations: Not Supported 00:14:53.291 Timestamp: Not Supported 00:14:53.291 Copy: Supported 00:14:53.291 Volatile Write Cache: Present 00:14:53.291 Atomic Write Unit (Normal): 1 00:14:53.291 Atomic Write Unit (PFail): 1 00:14:53.291 Atomic Compare & Write Unit: 1 00:14:53.291 Fused Compare & Write: Supported 00:14:53.291 Scatter-Gather List 00:14:53.291 SGL Command Set: Supported (Dword aligned) 00:14:53.291 SGL Keyed: Not Supported 00:14:53.291 SGL Bit Bucket Descriptor: Not Supported 00:14:53.291 SGL Metadata Pointer: Not Supported 00:14:53.291 Oversized SGL: Not Supported 00:14:53.291 SGL Metadata Address: Not Supported 00:14:53.291 SGL Offset: Not Supported 00:14:53.291 Transport SGL Data Block: Not Supported 00:14:53.291 Replay Protected Memory Block: Not Supported 00:14:53.291 00:14:53.291 Firmware Slot Information 00:14:53.291 ========================= 00:14:53.291 Active slot: 1 00:14:53.291 Slot 1 Firmware Revision: 24.05.1 00:14:53.291 00:14:53.291 00:14:53.291 Commands Supported and Effects 00:14:53.291 ============================== 00:14:53.291 Admin Commands 00:14:53.291 -------------- 00:14:53.291 Get Log Page (02h): Supported 00:14:53.291 Identify (06h): Supported 00:14:53.291 Abort (08h): Supported 00:14:53.291 Set Features (09h): Supported 00:14:53.291 Get Features (0Ah): Supported 00:14:53.291 Asynchronous Event Request (0Ch): Supported 00:14:53.291 Keep Alive (18h): Supported 00:14:53.291 I/O Commands 00:14:53.291 ------------ 00:14:53.291 Flush (00h): Supported LBA-Change 00:14:53.291 Write (01h): Supported LBA-Change 00:14:53.291 Read (02h): Supported 00:14:53.291 Compare (05h): Supported 00:14:53.291 Write Zeroes (08h): Supported LBA-Change 00:14:53.291 Dataset Management (09h): Supported LBA-Change 00:14:53.291 Copy (19h): Supported LBA-Change 00:14:53.291 Unknown (79h): Supported LBA-Change 00:14:53.291 Unknown (7Ah): Supported 00:14:53.291 00:14:53.291 Error Log 00:14:53.291 ========= 00:14:53.291 00:14:53.291 Arbitration 00:14:53.291 =========== 00:14:53.291 Arbitration Burst: 1 00:14:53.291 00:14:53.291 Power Management 00:14:53.291 ================ 00:14:53.291 Number of Power States: 1 00:14:53.291 Current Power State: Power State #0 00:14:53.291 Power State #0: 00:14:53.291 Max Power: 0.00 W 00:14:53.291 Non-Operational State: Operational 00:14:53.291 Entry Latency: Not Reported 00:14:53.291 Exit Latency: Not Reported 00:14:53.291 Relative Read Throughput: 0 00:14:53.291 Relative Read Latency: 0 00:14:53.291 Relative Write Throughput: 0 00:14:53.291 Relative Write Latency: 0 00:14:53.291 Idle Power: Not Reported 00:14:53.291 Active Power: Not Reported 00:14:53.291 Non-Operational Permissive Mode: Not Supported 00:14:53.291 00:14:53.291 Health Information 00:14:53.291 ================== 00:14:53.291 Critical Warnings: 00:14:53.291 Available Spare Space: OK 00:14:53.291 Temperature: OK 00:14:53.291 Device Reliability: OK 00:14:53.291 Read Only: No 00:14:53.291 Volatile Memory Backup: OK 00:14:53.291 Current Temperature: 0 Kelvin[2024-07-14 04:31:13.354325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:53.291 [2024-07-14 04:31:13.354342] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:53.291 [2024-07-14 04:31:13.354383] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:53.291 [2024-07-14 04:31:13.354401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.291 [2024-07-14 04:31:13.354412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.291 [2024-07-14 04:31:13.354421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.291 [2024-07-14 04:31:13.354431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.291 [2024-07-14 04:31:13.354900] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:53.291 [2024-07-14 04:31:13.354922] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:53.291 [2024-07-14 04:31:13.355906] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:53.291 [2024-07-14 04:31:13.355980] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:53.291 [2024-07-14 04:31:13.355994] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:53.291 [2024-07-14 04:31:13.356913] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:53.291 [2024-07-14 04:31:13.356936] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:53.291 [2024-07-14 04:31:13.356992] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:53.291 [2024-07-14 04:31:13.358952] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:53.291 (-273 Celsius) 00:14:53.291 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:53.291 Available Spare: 0% 00:14:53.291 Available Spare Threshold: 0% 00:14:53.291 Life Percentage Used: 0% 00:14:53.291 Data Units Read: 0 00:14:53.291 Data Units Written: 0 00:14:53.291 Host Read Commands: 0 00:14:53.291 Host Write Commands: 0 00:14:53.291 Controller Busy Time: 0 minutes 00:14:53.291 Power Cycles: 0 00:14:53.291 Power On Hours: 0 hours 00:14:53.291 Unsafe Shutdowns: 0 00:14:53.291 Unrecoverable Media Errors: 0 00:14:53.291 Lifetime Error Log Entries: 0 00:14:53.291 Warning Temperature Time: 0 minutes 00:14:53.291 Critical Temperature Time: 0 minutes 00:14:53.291 00:14:53.291 Number of Queues 00:14:53.291 ================ 00:14:53.291 Number of I/O Submission Queues: 127 00:14:53.291 Number of I/O Completion Queues: 127 00:14:53.291 00:14:53.291 Active Namespaces 00:14:53.291 ================= 00:14:53.291 Namespace ID:1 00:14:53.291 Error Recovery Timeout: Unlimited 00:14:53.291 Command Set Identifier: NVM (00h) 00:14:53.291 Deallocate: Supported 00:14:53.291 Deallocated/Unwritten Error: Not Supported 
00:14:53.291 Deallocated Read Value: Unknown 00:14:53.291 Deallocate in Write Zeroes: Not Supported 00:14:53.291 Deallocated Guard Field: 0xFFFF 00:14:53.291 Flush: Supported 00:14:53.291 Reservation: Supported 00:14:53.291 Namespace Sharing Capabilities: Multiple Controllers 00:14:53.291 Size (in LBAs): 131072 (0GiB) 00:14:53.291 Capacity (in LBAs): 131072 (0GiB) 00:14:53.291 Utilization (in LBAs): 131072 (0GiB) 00:14:53.291 NGUID: 1FED7E75FEE94DAAAA4C829597A55C96 00:14:53.291 UUID: 1fed7e75-fee9-4daa-aa4c-829597a55c96 00:14:53.291 Thin Provisioning: Not Supported 00:14:53.291 Per-NS Atomic Units: Yes 00:14:53.291 Atomic Boundary Size (Normal): 0 00:14:53.291 Atomic Boundary Size (PFail): 0 00:14:53.291 Atomic Boundary Offset: 0 00:14:53.291 Maximum Single Source Range Length: 65535 00:14:53.291 Maximum Copy Length: 65535 00:14:53.291 Maximum Source Range Count: 1 00:14:53.291 NGUID/EUI64 Never Reused: No 00:14:53.291 Namespace Write Protected: No 00:14:53.291 Number of LBA Formats: 1 00:14:53.291 Current LBA Format: LBA Format #00 00:14:53.291 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:53.291 00:14:53.291 04:31:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:53.291 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.550 [2024-07-14 04:31:13.591683] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:58.848 Initializing NVMe Controllers 00:14:58.848 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:58.848 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:58.848 Initialization complete. Launching workers. 00:14:58.848 ======================================================== 00:14:58.848 Latency(us) 00:14:58.848 Device Information : IOPS MiB/s Average min max 00:14:58.848 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 36109.60 141.05 3544.59 1161.44 7549.61 00:14:58.848 ======================================================== 00:14:58.848 Total : 36109.60 141.05 3544.59 1161.44 7549.61 00:14:58.848 00:14:58.848 [2024-07-14 04:31:18.613583] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:58.848 04:31:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:58.848 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.848 [2024-07-14 04:31:18.854736] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:04.157 Initializing NVMe Controllers 00:15:04.157 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:04.157 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:04.157 Initialization complete. Launching workers. 
00:15:04.157 ======================================================== 00:15:04.157 Latency(us) 00:15:04.157 Device Information : IOPS MiB/s Average min max 00:15:04.157 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16037.40 62.65 7988.08 6968.36 15612.60 00:15:04.157 ======================================================== 00:15:04.157 Total : 16037.40 62.65 7988.08 6968.36 15612.60 00:15:04.157 00:15:04.157 [2024-07-14 04:31:23.886593] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:04.157 04:31:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:04.157 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.157 [2024-07-14 04:31:24.109656] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:09.438 [2024-07-14 04:31:29.185217] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:09.438 Initializing NVMe Controllers 00:15:09.438 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:09.438 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:09.438 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:09.438 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:09.438 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:09.438 Initialization complete. Launching workers. 00:15:09.438 Starting thread on core 2 00:15:09.438 Starting thread on core 3 00:15:09.438 Starting thread on core 1 00:15:09.438 04:31:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:09.438 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.439 [2024-07-14 04:31:29.489733] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:12.737 [2024-07-14 04:31:32.759185] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:12.737 Initializing NVMe Controllers 00:15:12.737 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:12.737 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:12.737 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:12.737 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:12.737 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:12.738 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:12.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:12.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:12.738 Initialization complete. Launching workers. 
00:15:12.738 Starting thread on core 1 with urgent priority queue 00:15:12.738 Starting thread on core 2 with urgent priority queue 00:15:12.738 Starting thread on core 3 with urgent priority queue 00:15:12.738 Starting thread on core 0 with urgent priority queue 00:15:12.738 SPDK bdev Controller (SPDK1 ) core 0: 2957.67 IO/s 33.81 secs/100000 ios 00:15:12.738 SPDK bdev Controller (SPDK1 ) core 1: 3177.67 IO/s 31.47 secs/100000 ios 00:15:12.738 SPDK bdev Controller (SPDK1 ) core 2: 3569.67 IO/s 28.01 secs/100000 ios 00:15:12.738 SPDK bdev Controller (SPDK1 ) core 3: 3218.33 IO/s 31.07 secs/100000 ios 00:15:12.738 ======================================================== 00:15:12.738 00:15:12.738 04:31:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:12.738 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.996 [2024-07-14 04:31:33.059407] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:12.996 Initializing NVMe Controllers 00:15:12.996 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:12.996 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:12.996 Namespace ID: 1 size: 0GB 00:15:12.996 Initialization complete. 00:15:12.996 INFO: using host memory buffer for IO 00:15:12.996 Hello world! 00:15:12.996 [2024-07-14 04:31:33.093012] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:12.996 04:31:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:12.996 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.254 [2024-07-14 04:31:33.385397] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:14.633 Initializing NVMe Controllers 00:15:14.633 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:14.633 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:14.633 Initialization complete. Launching workers. 
00:15:14.633 submit (in ns) avg, min, max = 10070.0, 3541.1, 4023256.7 00:15:14.633 complete (in ns) avg, min, max = 27011.4, 2077.8, 4019624.4 00:15:14.633 00:15:14.633 Submit histogram 00:15:14.633 ================ 00:15:14.633 Range in us Cumulative Count 00:15:14.633 3.532 - 3.556: 0.1132% ( 15) 00:15:14.633 3.556 - 3.579: 0.4755% ( 48) 00:15:14.633 3.579 - 3.603: 1.6153% ( 151) 00:15:14.633 3.603 - 3.627: 4.2950% ( 355) 00:15:14.633 3.627 - 3.650: 10.9300% ( 879) 00:15:14.633 3.650 - 3.674: 19.1350% ( 1087) 00:15:14.633 3.674 - 3.698: 29.0459% ( 1313) 00:15:14.633 3.698 - 3.721: 38.8738% ( 1302) 00:15:14.633 3.721 - 3.745: 46.5504% ( 1017) 00:15:14.633 3.745 - 3.769: 51.6153% ( 671) 00:15:14.633 3.769 - 3.793: 55.6914% ( 540) 00:15:14.633 3.793 - 3.816: 59.5184% ( 507) 00:15:14.633 3.816 - 3.840: 62.8321% ( 439) 00:15:14.633 3.840 - 3.864: 66.3421% ( 465) 00:15:14.633 3.864 - 3.887: 69.5124% ( 420) 00:15:14.633 3.887 - 3.911: 73.3469% ( 508) 00:15:14.633 3.911 - 3.935: 78.1024% ( 630) 00:15:14.633 3.935 - 3.959: 81.7104% ( 478) 00:15:14.633 3.959 - 3.982: 84.4203% ( 359) 00:15:14.633 3.982 - 4.006: 86.6018% ( 289) 00:15:14.633 4.006 - 4.030: 88.1718% ( 208) 00:15:14.633 4.030 - 4.053: 89.5229% ( 179) 00:15:14.633 4.053 - 4.077: 90.6703% ( 152) 00:15:14.633 4.077 - 4.101: 91.5685% ( 119) 00:15:14.633 4.101 - 4.124: 92.3762% ( 107) 00:15:14.633 4.124 - 4.148: 93.1537% ( 103) 00:15:14.633 4.148 - 4.172: 93.8481% ( 92) 00:15:14.633 4.172 - 4.196: 94.3992% ( 73) 00:15:14.633 4.196 - 4.219: 94.8068% ( 54) 00:15:14.633 4.219 - 4.243: 95.1087% ( 40) 00:15:14.633 4.243 - 4.267: 95.3653% ( 34) 00:15:14.633 4.267 - 4.290: 95.6220% ( 34) 00:15:14.633 4.290 - 4.314: 95.8107% ( 25) 00:15:14.633 4.314 - 4.338: 96.0069% ( 26) 00:15:14.633 4.338 - 4.361: 96.1428% ( 18) 00:15:14.633 4.361 - 4.385: 96.2485% ( 14) 00:15:14.633 4.385 - 4.409: 96.3617% ( 15) 00:15:14.633 4.409 - 4.433: 96.4598% ( 13) 00:15:14.633 4.433 - 4.456: 96.4976% ( 5) 00:15:14.633 4.456 - 4.480: 96.5806% ( 11) 00:15:14.633 4.480 - 4.504: 96.6184% ( 5) 00:15:14.633 4.504 - 4.527: 96.6335% ( 2) 00:15:14.633 4.527 - 4.551: 96.6787% ( 6) 00:15:14.633 4.551 - 4.575: 96.7089% ( 4) 00:15:14.633 4.575 - 4.599: 96.7316% ( 3) 00:15:14.633 4.599 - 4.622: 96.7391% ( 1) 00:15:14.633 4.646 - 4.670: 96.7542% ( 2) 00:15:14.633 4.670 - 4.693: 96.7995% ( 6) 00:15:14.633 4.693 - 4.717: 96.8146% ( 2) 00:15:14.633 4.717 - 4.741: 96.9052% ( 12) 00:15:14.633 4.741 - 4.764: 96.9354% ( 4) 00:15:14.633 4.764 - 4.788: 96.9807% ( 6) 00:15:14.633 4.788 - 4.812: 97.0260% ( 6) 00:15:14.633 4.812 - 4.836: 97.0562% ( 4) 00:15:14.633 4.836 - 4.859: 97.1543% ( 13) 00:15:14.633 4.859 - 4.883: 97.1845% ( 4) 00:15:14.634 4.883 - 4.907: 97.2373% ( 7) 00:15:14.634 4.907 - 4.930: 97.3053% ( 9) 00:15:14.634 4.930 - 4.954: 97.3505% ( 6) 00:15:14.634 4.954 - 4.978: 97.3807% ( 4) 00:15:14.634 4.978 - 5.001: 97.4185% ( 5) 00:15:14.634 5.001 - 5.025: 97.4411% ( 3) 00:15:14.634 5.025 - 5.049: 97.4562% ( 2) 00:15:14.634 5.049 - 5.073: 97.4638% ( 1) 00:15:14.634 5.073 - 5.096: 97.5091% ( 6) 00:15:14.634 5.096 - 5.120: 97.5166% ( 1) 00:15:14.634 5.120 - 5.144: 97.5317% ( 2) 00:15:14.634 5.144 - 5.167: 97.5543% ( 3) 00:15:14.634 5.167 - 5.191: 97.5770% ( 3) 00:15:14.634 5.191 - 5.215: 97.5996% ( 3) 00:15:14.634 5.215 - 5.239: 97.6223% ( 3) 00:15:14.634 5.239 - 5.262: 97.6374% ( 2) 00:15:14.634 5.262 - 5.286: 97.6751% ( 5) 00:15:14.634 5.310 - 5.333: 97.6827% ( 1) 00:15:14.634 5.333 - 5.357: 97.7129% ( 4) 00:15:14.634 5.357 - 5.381: 97.7204% ( 1) 00:15:14.634 5.381 - 5.404: 97.7355% ( 
2) 00:15:14.634 5.404 - 5.428: 97.7582% ( 3) 00:15:14.634 5.428 - 5.452: 97.7732% ( 2) 00:15:14.634 5.499 - 5.523: 97.7808% ( 1) 00:15:14.634 5.547 - 5.570: 97.7883% ( 1) 00:15:14.634 5.570 - 5.594: 97.7959% ( 1) 00:15:14.634 5.594 - 5.618: 97.8185% ( 3) 00:15:14.634 5.618 - 5.641: 97.8261% ( 1) 00:15:14.634 5.665 - 5.689: 97.8487% ( 3) 00:15:14.634 5.689 - 5.713: 97.8563% ( 1) 00:15:14.634 5.713 - 5.736: 97.8714% ( 2) 00:15:14.634 5.760 - 5.784: 97.8789% ( 1) 00:15:14.634 5.807 - 5.831: 97.8865% ( 1) 00:15:14.634 5.831 - 5.855: 97.9091% ( 3) 00:15:14.634 5.879 - 5.902: 97.9167% ( 1) 00:15:14.634 5.973 - 5.997: 97.9318% ( 2) 00:15:14.634 6.021 - 6.044: 97.9469% ( 2) 00:15:14.634 6.044 - 6.068: 97.9544% ( 1) 00:15:14.634 6.068 - 6.116: 97.9620% ( 1) 00:15:14.634 6.116 - 6.163: 97.9846% ( 3) 00:15:14.634 6.210 - 6.258: 97.9921% ( 1) 00:15:14.634 6.305 - 6.353: 98.0148% ( 3) 00:15:14.634 6.542 - 6.590: 98.0223% ( 1) 00:15:14.634 6.684 - 6.732: 98.0299% ( 1) 00:15:14.634 6.827 - 6.874: 98.0374% ( 1) 00:15:14.634 6.921 - 6.969: 98.0450% ( 1) 00:15:14.634 6.969 - 7.016: 98.0525% ( 1) 00:15:14.634 7.064 - 7.111: 98.0601% ( 1) 00:15:14.634 7.159 - 7.206: 98.0676% ( 1) 00:15:14.634 7.206 - 7.253: 98.0752% ( 1) 00:15:14.634 7.253 - 7.301: 98.0827% ( 1) 00:15:14.634 7.301 - 7.348: 98.0903% ( 1) 00:15:14.634 7.348 - 7.396: 98.0978% ( 1) 00:15:14.634 7.396 - 7.443: 98.1054% ( 1) 00:15:14.634 7.443 - 7.490: 98.1205% ( 2) 00:15:14.634 7.490 - 7.538: 98.1280% ( 1) 00:15:14.634 7.585 - 7.633: 98.1507% ( 3) 00:15:14.634 7.633 - 7.680: 98.1582% ( 1) 00:15:14.634 7.680 - 7.727: 98.1809% ( 3) 00:15:14.634 7.822 - 7.870: 98.1884% ( 1) 00:15:14.634 7.917 - 7.964: 98.1960% ( 1) 00:15:14.634 7.964 - 8.012: 98.2261% ( 4) 00:15:14.634 8.012 - 8.059: 98.2412% ( 2) 00:15:14.634 8.107 - 8.154: 98.2488% ( 1) 00:15:14.634 8.154 - 8.201: 98.2563% ( 1) 00:15:14.634 8.201 - 8.249: 98.2865% ( 4) 00:15:14.634 8.249 - 8.296: 98.2941% ( 1) 00:15:14.634 8.296 - 8.344: 98.3092% ( 2) 00:15:14.634 8.391 - 8.439: 98.3243% ( 2) 00:15:14.634 8.439 - 8.486: 98.3318% ( 1) 00:15:14.634 8.486 - 8.533: 98.3394% ( 1) 00:15:14.634 8.533 - 8.581: 98.3469% ( 1) 00:15:14.634 8.628 - 8.676: 98.3545% ( 1) 00:15:14.634 8.818 - 8.865: 98.3620% ( 1) 00:15:14.634 8.913 - 8.960: 98.3696% ( 1) 00:15:14.634 8.960 - 9.007: 98.3771% ( 1) 00:15:14.634 9.055 - 9.102: 98.3847% ( 1) 00:15:14.634 9.813 - 9.861: 98.3998% ( 2) 00:15:14.634 9.861 - 9.908: 98.4073% ( 1) 00:15:14.634 9.908 - 9.956: 98.4149% ( 1) 00:15:14.634 9.956 - 10.003: 98.4224% ( 1) 00:15:14.634 10.050 - 10.098: 98.4300% ( 1) 00:15:14.634 10.098 - 10.145: 98.4375% ( 1) 00:15:14.634 10.145 - 10.193: 98.4450% ( 1) 00:15:14.634 10.240 - 10.287: 98.4526% ( 1) 00:15:14.634 10.335 - 10.382: 98.4677% ( 2) 00:15:14.634 10.382 - 10.430: 98.4752% ( 1) 00:15:14.634 10.430 - 10.477: 98.4903% ( 2) 00:15:14.634 10.572 - 10.619: 98.4979% ( 1) 00:15:14.634 10.619 - 10.667: 98.5054% ( 1) 00:15:14.634 10.761 - 10.809: 98.5130% ( 1) 00:15:14.634 10.856 - 10.904: 98.5205% ( 1) 00:15:14.634 11.046 - 11.093: 98.5281% ( 1) 00:15:14.634 11.141 - 11.188: 98.5356% ( 1) 00:15:14.634 11.283 - 11.330: 98.5432% ( 1) 00:15:14.634 11.757 - 11.804: 98.5507% ( 1) 00:15:14.634 11.804 - 11.852: 98.5658% ( 2) 00:15:14.634 11.947 - 11.994: 98.5734% ( 1) 00:15:14.634 12.326 - 12.421: 98.5809% ( 1) 00:15:14.634 12.516 - 12.610: 98.6036% ( 3) 00:15:14.634 12.610 - 12.705: 98.6187% ( 2) 00:15:14.634 12.705 - 12.800: 98.6338% ( 2) 00:15:14.634 12.800 - 12.895: 98.6413% ( 1) 00:15:14.634 12.895 - 12.990: 98.6489% ( 1) 00:15:14.634 
13.084 - 13.179: 98.6564% ( 1) 00:15:14.634 13.179 - 13.274: 98.6715% ( 2) 00:15:14.634 13.369 - 13.464: 98.6790% ( 1) 00:15:14.634 13.464 - 13.559: 98.6866% ( 1) 00:15:14.634 13.559 - 13.653: 98.6941% ( 1) 00:15:14.634 13.748 - 13.843: 98.7092% ( 2) 00:15:14.634 13.843 - 13.938: 98.7168% ( 1) 00:15:14.634 14.127 - 14.222: 98.7243% ( 1) 00:15:14.634 14.222 - 14.317: 98.7470% ( 3) 00:15:14.634 14.412 - 14.507: 98.7545% ( 1) 00:15:14.634 14.601 - 14.696: 98.7621% ( 1) 00:15:14.634 14.886 - 14.981: 98.7696% ( 1) 00:15:14.634 15.076 - 15.170: 98.7772% ( 1) 00:15:14.634 15.360 - 15.455: 98.7847% ( 1) 00:15:14.634 15.739 - 15.834: 98.7923% ( 1) 00:15:14.634 16.403 - 16.498: 98.7998% ( 1) 00:15:14.634 16.972 - 17.067: 98.8074% ( 1) 00:15:14.634 17.161 - 17.256: 98.8225% ( 2) 00:15:14.634 17.256 - 17.351: 98.8300% ( 1) 00:15:14.634 17.446 - 17.541: 98.8376% ( 1) 00:15:14.634 17.541 - 17.636: 98.8829% ( 6) 00:15:14.634 17.636 - 17.730: 98.9206% ( 5) 00:15:14.634 17.730 - 17.825: 98.9508% ( 4) 00:15:14.634 17.825 - 17.920: 98.9810% ( 4) 00:15:14.634 17.920 - 18.015: 99.0263% ( 6) 00:15:14.634 18.015 - 18.110: 99.1018% ( 10) 00:15:14.634 18.110 - 18.204: 99.1923% ( 12) 00:15:14.634 18.204 - 18.299: 99.2980% ( 14) 00:15:14.634 18.299 - 18.394: 99.3735% ( 10) 00:15:14.634 18.394 - 18.489: 99.4339% ( 8) 00:15:14.634 18.489 - 18.584: 99.4943% ( 8) 00:15:14.634 18.584 - 18.679: 99.5320% ( 5) 00:15:14.634 18.679 - 18.773: 99.5773% ( 6) 00:15:14.634 18.773 - 18.868: 99.6150% ( 5) 00:15:14.634 18.868 - 18.963: 99.6603% ( 6) 00:15:14.634 18.963 - 19.058: 99.6754% ( 2) 00:15:14.634 19.058 - 19.153: 99.6830% ( 1) 00:15:14.634 19.153 - 19.247: 99.7056% ( 3) 00:15:14.634 19.247 - 19.342: 99.7207% ( 2) 00:15:14.634 19.342 - 19.437: 99.7283% ( 1) 00:15:14.634 19.437 - 19.532: 99.7358% ( 1) 00:15:14.634 19.627 - 19.721: 99.7434% ( 1) 00:15:14.634 19.911 - 20.006: 99.7585% ( 2) 00:15:14.634 20.101 - 20.196: 99.7660% ( 1) 00:15:14.634 20.954 - 21.049: 99.7736% ( 1) 00:15:14.634 23.040 - 23.135: 99.7811% ( 1) 00:15:14.634 23.135 - 23.230: 99.7886% ( 1) 00:15:14.634 23.799 - 23.893: 99.7962% ( 1) 00:15:14.634 24.178 - 24.273: 99.8037% ( 1) 00:15:14.634 26.169 - 26.359: 99.8113% ( 1) 00:15:14.634 26.548 - 26.738: 99.8188% ( 1) 00:15:14.634 28.065 - 28.255: 99.8264% ( 1) 00:15:14.634 28.824 - 29.013: 99.8339% ( 1) 00:15:14.634 29.013 - 29.203: 99.8415% ( 1) 00:15:14.634 35.650 - 35.840: 99.8490% ( 1) 00:15:14.634 3980.705 - 4004.978: 99.9698% ( 16) 00:15:14.634 4004.978 - 4029.250: 100.0000% ( 4) 00:15:14.634 00:15:14.634 Complete histogram 00:15:14.634 ================== 00:15:14.634 Range in us Cumulative Count 00:15:14.634 2.074 - 2.086: 8.2126% ( 1088) 00:15:14.634 2.086 - 2.098: 35.1902% ( 3574) 00:15:14.634 2.098 - 2.110: 38.9795% ( 502) 00:15:14.634 2.110 - 2.121: 48.4828% ( 1259) 00:15:14.634 2.121 - 2.133: 57.3521% ( 1175) 00:15:14.634 2.133 - 2.145: 59.0278% ( 222) 00:15:14.634 2.145 - 2.157: 67.1120% ( 1071) 00:15:14.634 2.157 - 2.169: 73.7394% ( 878) 00:15:14.634 2.169 - 2.181: 74.9170% ( 156) 00:15:14.634 2.181 - 2.193: 78.8647% ( 523) 00:15:14.634 2.193 - 2.204: 81.6576% ( 370) 00:15:14.634 2.204 - 2.216: 82.2011% ( 72) 00:15:14.634 2.216 - 2.228: 85.0694% ( 380) 00:15:14.634 2.228 - 2.240: 87.6510% ( 342) 00:15:14.634 2.240 - 2.252: 89.7343% ( 276) 00:15:14.634 2.252 - 2.264: 91.9460% ( 293) 00:15:14.634 2.264 - 2.276: 93.0254% ( 143) 00:15:14.634 2.276 - 2.287: 93.3575% ( 44) 00:15:14.634 2.287 - 2.299: 93.6896% ( 44) 00:15:14.634 2.299 - 2.311: 93.9463% ( 34) 00:15:14.634 2.311 - 2.323: 94.5577% ( 81) 
00:15:14.634 2.323 - 2.335: 95.1464% ( 78) 00:15:14.634 2.335 - 2.347: 95.2521% ( 14) 00:15:14.634 2.347 - 2.359: 95.3050% ( 7) 00:15:14.634 2.359 - 2.370: 95.3351% ( 4) 00:15:14.634 2.370 - 2.382: 95.4031% ( 9) 00:15:14.634 2.382 - 2.394: 95.5163% ( 15) 00:15:14.634 2.394 - 2.406: 95.8711% ( 47) 00:15:14.634 2.406 - 2.418: 96.0900% ( 29) 00:15:14.634 2.418 - 2.430: 96.4146% ( 43) 00:15:14.634 2.430 - 2.441: 96.6335% ( 29) 00:15:14.634 2.441 - 2.453: 96.8448% ( 28) 00:15:14.634 2.453 - 2.465: 97.0109% ( 22) 00:15:14.634 2.465 - 2.477: 97.1467% ( 18) 00:15:14.635 2.477 - 2.489: 97.2373% ( 12) 00:15:14.635 2.489 - 2.501: 97.3958% ( 21) 00:15:14.635 2.501 - 2.513: 97.5468% ( 20) 00:15:14.635 2.513 - 2.524: 97.6525% ( 14) 00:15:14.635 2.524 - 2.536: 97.7732% ( 16) 00:15:14.635 2.536 - 2.548: 97.8487% ( 10) 00:15:14.635 2.548 - 2.560: 97.9242% ( 10) 00:15:14.635 2.560 - 2.572: 97.9771% ( 7) 00:15:14.635 2.572 - 2.584: 98.0374% ( 8) 00:15:14.635 2.584 - 2.596: 98.0450% ( 1) 00:15:14.635 2.596 - 2.607: 98.0601% ( 2) 00:15:14.635 2.607 - 2.619: 98.0903% ( 4) 00:15:14.635 2.631 - 2.643: 98.0978% ( 1) 00:15:14.635 2.655 - 2.667: 98.1129% ( 2) 00:15:14.635 2.667 - 2.679: 98.1205% ( 1) 00:15:14.635 2.679 - 2.690: 98.1356% ( 2) 00:15:14.635 2.690 - 2.702: 98.1431% ( 1) 00:15:14.635 2.702 - 2.714: 98.1507% ( 1) 00:15:14.635 2.714 - 2.726: 98.1582% ( 1) 00:15:14.635 2.726 - 2.738: 98.1658% ( 1) 00:15:14.635 2.761 - 2.773: 98.1733% ( 1) 00:15:14.635 2.773 - 2.785: 98.1809% ( 1) 00:15:14.635 2.797 - 2.809: 98.1884% ( 1) 00:15:14.635 2.821 - 2.833: 98.1960% ( 1) 00:15:14.635 2.856 - 2.868: 98.2035% ( 1) 00:15:14.635 2.880 - 2.892: 98.2111% ( 1) 00:15:14.635 2.904 - 2.916: 98.2261% ( 2) 00:15:14.635 2.916 - 2.927: 98.2337% ( 1) 00:15:14.635 2.927 - 2.939: 98.2412% ( 1) 00:15:14.635 2.963 - 2.975: 98.2488% ( 1) 00:15:14.635 2.987 - 2.999: 98.2563% ( 1) 00:15:14.635 2.999 - 3.010: 98.2714% ( 2) 00:15:14.635 3.022 - 3.034: 98.2790% ( 1) 00:15:14.635 3.034 - 3.058: 98.2941% ( 2) 00:15:14.635 3.058 - 3.081: 98.3016% ( 1) 00:15:14.635 3.081 - 3.105: 98.3318% ( 4) 00:15:14.635 3.105 - 3.129: 98.3394% ( 1) 00:15:14.635 3.129 - 3.153: 98.3545% ( 2) 00:15:14.635 3.153 - 3.176: 98.3620% ( 1) 00:15:14.635 3.176 - 3.200: 98.3696% ( 1) 00:15:14.635 3.200 - 3.224: 98.3771% ( 1) 00:15:14.635 3.224 - 3.247: 98.3847% ( 1) 00:15:14.635 3.247 - 3.271: 98.3922% ( 1) 00:15:14.635 3.295 - 3.319: 98.4149% ( 3) 00:15:14.635 3.319 - 3.342: 98.4375% ( 3) 00:15:14.635 3.342 - 3.366: 98.4526% ( 2) 00:15:14.635 3.366 - 3.390: 98.4677% ( 2) 00:15:14.635 3.390 - 3.413: 98.5054% ( 5) 00:15:14.635 3.413 - 3.437: 98.5130% ( 1) 00:15:14.635 3.461 - 3.484: 98.5281% ( 2) 00:15:14.635 3.484 - 3.508: 98.5432% ( 2) 00:15:14.635 3.508 - 3.532: 98.5583% ( 2) 00:15:14.635 3.532 - 3.556: 98.5658% ( 1) 00:15:14.635 3.556 - 3.579: 98.5734% ( 1) 00:15:14.635 3.579 - 3.603: 98.5809% ( 1) 00:15:14.635 3.603 - 3.627: 98.6111% ( 4) 00:15:14.635 3.627 - 3.650: 98.6187% ( 1) 00:15:14.635 3.650 - 3.674: 98.6262% ( 1) 00:15:14.635 3.721 - 3.745: 98.6338% ( 1) 00:15:14.635 3.745 - 3.769: 98.6489% ( 2) 00:15:14.635 3.769 - 3.793: 98.6639% ( 2) 00:15:14.635 3.793 - 3.816: 98.6715% ( 1) 00:15:14.635 3.816 - 3.840: 98.6790% ( 1) 00:15:14.635 3.840 - 3.864: 98.6866% ( 1) 00:15:14.635 3.982 - 4.006: 98.6941% ( 1) 00:15:14.635 5.333 - 5.357: 98.7017% ( 1) 00:15:14.635 5.594 - 5.618: 98.7092% ( 1) 00:15:14.635 5.618 - 5.641: 98.7168% ( 1) 00:15:14.635 5.665 - 5.689: 98.7243% ( 1) 00:15:14.635 5.973 - 5.997: 98.7319% ( 1) 00:15:14.635 6.044 - 6.068: 98.7394% ( 1) 
00:15:14.635 6.353 - 6.400: 98.7470% ( 1) 00:15:14.635 6.827 - 6.874: 98.7545% ( 1) 00:15:14.635 6.874 - 6.921: 98.7696% ( 2) 00:15:14.635 7.253 - 7.301: 98.7772% ( 1) 00:15:14.635 8.012 - 8.059: 98.7847% ( 1) 00:15:14.635 9.007 - 9.055: 98.7923% ( 1) 00:15:14.635 15.455 - 15.550: 98.7998% ( 1) 00:15:14.635 15.550 - 15.644: 98.8149% ( 2) 00:15:14.635 15.644 - 15.739: 98.8300% ( 2) 00:15:14.635 15.739 - 15.834: 98.8451% ( 2) 00:15:14.635 15.834 - 15.929: 98.8678% ( 3) 00:15:14.635 15.929 - 16.024: 98.8904% ( 3) 00:15:14.635 16.024 - 16.119: 98.9357% ( 6) 00:15:14.635 16.119 - 16.213: 98.9583% ( 3) 00:15:14.635 16.213 - 16.308: 98.9961% ( 5) 00:15:14.635 16.308 - 16.403: 99.0414% ( 6) 00:15:14.635 16.403 - 16.498: 99.0942%[2024-07-14 04:31:34.407643] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:14.635 ( 7) 00:15:14.635 16.498 - 16.593: 99.1697% ( 10) 00:15:14.635 16.593 - 16.687: 99.2074% ( 5) 00:15:14.635 16.687 - 16.782: 99.2603% ( 7) 00:15:14.635 16.782 - 16.877: 99.2829% ( 3) 00:15:14.635 17.067 - 17.161: 99.3056% ( 3) 00:15:14.635 17.161 - 17.256: 99.3207% ( 2) 00:15:14.635 17.256 - 17.351: 99.3282% ( 1) 00:15:14.635 17.825 - 17.920: 99.3357% ( 1) 00:15:14.635 18.015 - 18.110: 99.3433% ( 1) 00:15:14.635 18.394 - 18.489: 99.3508% ( 1) 00:15:14.635 18.773 - 18.868: 99.3584% ( 1) 00:15:14.635 21.333 - 21.428: 99.3659% ( 1) 00:15:14.635 22.566 - 22.661: 99.3735% ( 1) 00:15:14.635 23.893 - 23.988: 99.3810% ( 1) 00:15:14.635 3980.705 - 4004.978: 99.8339% ( 60) 00:15:14.635 4004.978 - 4029.250: 100.0000% ( 22) 00:15:14.635 00:15:14.635 04:31:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:14.635 04:31:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:14.635 04:31:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:14.635 04:31:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:14.635 04:31:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:14.635 [ 00:15:14.635 { 00:15:14.635 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:14.635 "subtype": "Discovery", 00:15:14.635 "listen_addresses": [], 00:15:14.635 "allow_any_host": true, 00:15:14.635 "hosts": [] 00:15:14.635 }, 00:15:14.635 { 00:15:14.635 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:14.635 "subtype": "NVMe", 00:15:14.635 "listen_addresses": [ 00:15:14.635 { 00:15:14.635 "trtype": "VFIOUSER", 00:15:14.635 "adrfam": "IPv4", 00:15:14.635 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:14.635 "trsvcid": "0" 00:15:14.635 } 00:15:14.635 ], 00:15:14.635 "allow_any_host": true, 00:15:14.635 "hosts": [], 00:15:14.635 "serial_number": "SPDK1", 00:15:14.635 "model_number": "SPDK bdev Controller", 00:15:14.635 "max_namespaces": 32, 00:15:14.635 "min_cntlid": 1, 00:15:14.635 "max_cntlid": 65519, 00:15:14.635 "namespaces": [ 00:15:14.635 { 00:15:14.635 "nsid": 1, 00:15:14.635 "bdev_name": "Malloc1", 00:15:14.635 "name": "Malloc1", 00:15:14.635 "nguid": "1FED7E75FEE94DAAAA4C829597A55C96", 00:15:14.635 "uuid": "1fed7e75-fee9-4daa-aa4c-829597a55c96" 00:15:14.635 } 00:15:14.635 ] 00:15:14.635 }, 00:15:14.635 { 00:15:14.635 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:14.635 "subtype": "NVMe", 00:15:14.635 
"listen_addresses": [ 00:15:14.635 { 00:15:14.635 "trtype": "VFIOUSER", 00:15:14.635 "adrfam": "IPv4", 00:15:14.635 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:14.635 "trsvcid": "0" 00:15:14.635 } 00:15:14.635 ], 00:15:14.635 "allow_any_host": true, 00:15:14.635 "hosts": [], 00:15:14.635 "serial_number": "SPDK2", 00:15:14.635 "model_number": "SPDK bdev Controller", 00:15:14.635 "max_namespaces": 32, 00:15:14.635 "min_cntlid": 1, 00:15:14.635 "max_cntlid": 65519, 00:15:14.635 "namespaces": [ 00:15:14.635 { 00:15:14.635 "nsid": 1, 00:15:14.635 "bdev_name": "Malloc2", 00:15:14.635 "name": "Malloc2", 00:15:14.635 "nguid": "3E8E951425A74A478077D5D8BC590241", 00:15:14.635 "uuid": "3e8e9514-25a7-4a47-8077-d5d8bc590241" 00:15:14.635 } 00:15:14.635 ] 00:15:14.635 } 00:15:14.635 ] 00:15:14.635 04:31:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:14.635 04:31:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2755442 00:15:14.635 04:31:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:14.635 04:31:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:14.635 04:31:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:14.635 04:31:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:14.635 04:31:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:14.635 04:31:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:14.635 04:31:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:14.635 04:31:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:14.635 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.894 [2024-07-14 04:31:34.878376] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:14.894 Malloc3 00:15:14.894 04:31:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:15.151 [2024-07-14 04:31:35.241109] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:15.151 04:31:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:15.151 Asynchronous Event Request test 00:15:15.151 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:15.151 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:15.151 Registering asynchronous event callbacks... 00:15:15.151 Starting namespace attribute notice tests for all controllers... 00:15:15.151 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:15.151 aer_cb - Changed Namespace 00:15:15.151 Cleaning up... 
00:15:15.410 [ 00:15:15.410 { 00:15:15.410 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:15.410 "subtype": "Discovery", 00:15:15.410 "listen_addresses": [], 00:15:15.410 "allow_any_host": true, 00:15:15.410 "hosts": [] 00:15:15.410 }, 00:15:15.410 { 00:15:15.410 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:15.410 "subtype": "NVMe", 00:15:15.410 "listen_addresses": [ 00:15:15.410 { 00:15:15.410 "trtype": "VFIOUSER", 00:15:15.410 "adrfam": "IPv4", 00:15:15.410 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:15.410 "trsvcid": "0" 00:15:15.410 } 00:15:15.410 ], 00:15:15.410 "allow_any_host": true, 00:15:15.410 "hosts": [], 00:15:15.410 "serial_number": "SPDK1", 00:15:15.410 "model_number": "SPDK bdev Controller", 00:15:15.410 "max_namespaces": 32, 00:15:15.410 "min_cntlid": 1, 00:15:15.410 "max_cntlid": 65519, 00:15:15.410 "namespaces": [ 00:15:15.410 { 00:15:15.410 "nsid": 1, 00:15:15.410 "bdev_name": "Malloc1", 00:15:15.410 "name": "Malloc1", 00:15:15.410 "nguid": "1FED7E75FEE94DAAAA4C829597A55C96", 00:15:15.410 "uuid": "1fed7e75-fee9-4daa-aa4c-829597a55c96" 00:15:15.410 }, 00:15:15.410 { 00:15:15.410 "nsid": 2, 00:15:15.410 "bdev_name": "Malloc3", 00:15:15.410 "name": "Malloc3", 00:15:15.410 "nguid": "58BC7FD6775D458294FF1DA35DE6986E", 00:15:15.410 "uuid": "58bc7fd6-775d-4582-94ff-1da35de6986e" 00:15:15.410 } 00:15:15.410 ] 00:15:15.410 }, 00:15:15.410 { 00:15:15.410 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:15.410 "subtype": "NVMe", 00:15:15.410 "listen_addresses": [ 00:15:15.410 { 00:15:15.410 "trtype": "VFIOUSER", 00:15:15.410 "adrfam": "IPv4", 00:15:15.410 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:15.410 "trsvcid": "0" 00:15:15.410 } 00:15:15.410 ], 00:15:15.410 "allow_any_host": true, 00:15:15.410 "hosts": [], 00:15:15.410 "serial_number": "SPDK2", 00:15:15.410 "model_number": "SPDK bdev Controller", 00:15:15.410 "max_namespaces": 32, 00:15:15.410 "min_cntlid": 1, 00:15:15.410 "max_cntlid": 65519, 00:15:15.410 "namespaces": [ 00:15:15.410 { 00:15:15.410 "nsid": 1, 00:15:15.410 "bdev_name": "Malloc2", 00:15:15.410 "name": "Malloc2", 00:15:15.410 "nguid": "3E8E951425A74A478077D5D8BC590241", 00:15:15.410 "uuid": "3e8e9514-25a7-4a47-8077-d5d8bc590241" 00:15:15.410 } 00:15:15.410 ] 00:15:15.410 } 00:15:15.410 ] 00:15:15.410 04:31:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2755442 00:15:15.410 04:31:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:15.410 04:31:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:15.410 04:31:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:15.410 04:31:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:15.410 [2024-07-14 04:31:35.515783] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:15:15.410 [2024-07-14 04:31:35.515829] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2755460 ] 00:15:15.410 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.410 [2024-07-14 04:31:35.549937] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:15.410 [2024-07-14 04:31:35.556175] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:15.410 [2024-07-14 04:31:35.556207] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe59db9b000 00:15:15.410 [2024-07-14 04:31:35.557195] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:15.410 [2024-07-14 04:31:35.558203] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:15.410 [2024-07-14 04:31:35.559213] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:15.410 [2024-07-14 04:31:35.560202] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:15.410 [2024-07-14 04:31:35.561219] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:15.410 [2024-07-14 04:31:35.562223] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:15.410 [2024-07-14 04:31:35.563231] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:15.410 [2024-07-14 04:31:35.564252] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:15.410 [2024-07-14 04:31:35.565249] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:15.410 [2024-07-14 04:31:35.565271] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe59c951000 00:15:15.410 [2024-07-14 04:31:35.566382] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:15.410 [2024-07-14 04:31:35.584531] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:15.410 [2024-07-14 04:31:35.584565] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:15.410 [2024-07-14 04:31:35.586671] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:15.410 [2024-07-14 04:31:35.586723] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:15.411 [2024-07-14 04:31:35.586810] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:15:15.411 [2024-07-14 04:31:35.586834] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:15.411 [2024-07-14 04:31:35.586844] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:15.411 [2024-07-14 04:31:35.587676] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:15.411 [2024-07-14 04:31:35.587702] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:15.411 [2024-07-14 04:31:35.587716] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:15.411 [2024-07-14 04:31:35.588687] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:15.411 [2024-07-14 04:31:35.588708] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:15.411 [2024-07-14 04:31:35.588723] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:15.411 [2024-07-14 04:31:35.589690] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:15.411 [2024-07-14 04:31:35.589710] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:15.411 [2024-07-14 04:31:35.590691] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:15.411 [2024-07-14 04:31:35.590716] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:15.411 [2024-07-14 04:31:35.590727] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:15.411 [2024-07-14 04:31:35.590740] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:15.411 [2024-07-14 04:31:35.590862] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:15.411 [2024-07-14 04:31:35.590881] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:15.411 [2024-07-14 04:31:35.590902] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:15.411 [2024-07-14 04:31:35.591696] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:15.411 [2024-07-14 04:31:35.592705] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:15.411 [2024-07-14 04:31:35.593716] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:15.411 [2024-07-14 04:31:35.594707] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:15.411 [2024-07-14 04:31:35.594787] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:15.411 [2024-07-14 04:31:35.595723] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:15.411 [2024-07-14 04:31:35.595742] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:15.411 [2024-07-14 04:31:35.595751] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:15.411 [2024-07-14 04:31:35.595775] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:15.411 [2024-07-14 04:31:35.595788] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:15.411 [2024-07-14 04:31:35.595810] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:15.411 [2024-07-14 04:31:35.595820] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:15.411 [2024-07-14 04:31:35.595839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:15.672 [2024-07-14 04:31:35.603888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:15.672 [2024-07-14 04:31:35.603933] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:15.672 [2024-07-14 04:31:35.603946] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:15.672 [2024-07-14 04:31:35.603956] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:15.672 [2024-07-14 04:31:35.603964] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:15.672 [2024-07-14 04:31:35.603973] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:15.672 [2024-07-14 04:31:35.603986] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:15.672 [2024-07-14 04:31:35.603995] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:15.672 [2024-07-14 04:31:35.604008] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:15.672 [2024-07-14 04:31:35.604025] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:15.672 [2024-07-14 04:31:35.611879] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:15.672 [2024-07-14 04:31:35.611906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.672 [2024-07-14 04:31:35.611920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.672 [2024-07-14 04:31:35.611932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.672 [2024-07-14 04:31:35.611944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.672 [2024-07-14 04:31:35.611954] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:15.672 [2024-07-14 04:31:35.611971] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:15.672 [2024-07-14 04:31:35.611987] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:15.672 [2024-07-14 04:31:35.619878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:15.672 [2024-07-14 04:31:35.619898] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:15.672 [2024-07-14 04:31:35.619933] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:15.672 [2024-07-14 04:31:35.619947] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:15.672 [2024-07-14 04:31:35.619963] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:15.672 [2024-07-14 04:31:35.619979] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:15.672 [2024-07-14 04:31:35.627898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:15.672 [2024-07-14 04:31:35.627974] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:15.672 [2024-07-14 04:31:35.627991] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:15.672 [2024-07-14 04:31:35.628004] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:15.672 [2024-07-14 04:31:35.628013] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:15.672 [2024-07-14 04:31:35.628024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:15.672 
[2024-07-14 04:31:35.635876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:15.672 [2024-07-14 04:31:35.635908] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:15.672 [2024-07-14 04:31:35.635924] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:15.672 [2024-07-14 04:31:35.635940] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:15.672 [2024-07-14 04:31:35.635953] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:15.672 [2024-07-14 04:31:35.635962] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:15.672 [2024-07-14 04:31:35.635973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:15.672 [2024-07-14 04:31:35.643890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:15.672 [2024-07-14 04:31:35.643920] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:15.672 [2024-07-14 04:31:35.643937] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:15.672 [2024-07-14 04:31:35.643951] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:15.672 [2024-07-14 04:31:35.643960] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:15.672 [2024-07-14 04:31:35.643970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:15.672 [2024-07-14 04:31:35.651891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:15.672 [2024-07-14 04:31:35.651913] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:15.672 [2024-07-14 04:31:35.651927] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:15.672 [2024-07-14 04:31:35.651942] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:15.673 [2024-07-14 04:31:35.651953] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:15.673 [2024-07-14 04:31:35.651963] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:15.673 [2024-07-14 04:31:35.651972] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:15.673 [2024-07-14 04:31:35.651980] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:15.673 [2024-07-14 04:31:35.651989] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:15.673 [2024-07-14 04:31:35.652016] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:15.673 [2024-07-14 04:31:35.659894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:15.673 [2024-07-14 04:31:35.659919] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:15.673 [2024-07-14 04:31:35.667879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:15.673 [2024-07-14 04:31:35.667909] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:15.673 [2024-07-14 04:31:35.675876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:15.673 [2024-07-14 04:31:35.675902] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:15.673 [2024-07-14 04:31:35.683880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:15.673 [2024-07-14 04:31:35.683918] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:15.673 [2024-07-14 04:31:35.683929] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:15.673 [2024-07-14 04:31:35.683936] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:15.673 [2024-07-14 04:31:35.683942] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:15.673 [2024-07-14 04:31:35.683953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:15.673 [2024-07-14 04:31:35.683965] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:15.673 [2024-07-14 04:31:35.683974] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:15.673 [2024-07-14 04:31:35.683998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:15.673 [2024-07-14 04:31:35.684010] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:15.673 [2024-07-14 04:31:35.684018] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:15.673 [2024-07-14 04:31:35.684027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:15.673 [2024-07-14 04:31:35.684040] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:15.673 [2024-07-14 04:31:35.684048] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:15.673 [2024-07-14 04:31:35.684057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:15.673 [2024-07-14 04:31:35.691895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:15.673 [2024-07-14 04:31:35.691923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:15.673 [2024-07-14 04:31:35.691939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:15.673 [2024-07-14 04:31:35.691954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:15.673 ===================================================== 00:15:15.673 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:15.673 ===================================================== 00:15:15.673 Controller Capabilities/Features 00:15:15.673 ================================ 00:15:15.673 Vendor ID: 4e58 00:15:15.673 Subsystem Vendor ID: 4e58 00:15:15.673 Serial Number: SPDK2 00:15:15.673 Model Number: SPDK bdev Controller 00:15:15.673 Firmware Version: 24.05.1 00:15:15.673 Recommended Arb Burst: 6 00:15:15.673 IEEE OUI Identifier: 8d 6b 50 00:15:15.673 Multi-path I/O 00:15:15.673 May have multiple subsystem ports: Yes 00:15:15.673 May have multiple controllers: Yes 00:15:15.673 Associated with SR-IOV VF: No 00:15:15.673 Max Data Transfer Size: 131072 00:15:15.673 Max Number of Namespaces: 32 00:15:15.673 Max Number of I/O Queues: 127 00:15:15.673 NVMe Specification Version (VS): 1.3 00:15:15.673 NVMe Specification Version (Identify): 1.3 00:15:15.673 Maximum Queue Entries: 256 00:15:15.673 Contiguous Queues Required: Yes 00:15:15.673 Arbitration Mechanisms Supported 00:15:15.673 Weighted Round Robin: Not Supported 00:15:15.673 Vendor Specific: Not Supported 00:15:15.673 Reset Timeout: 15000 ms 00:15:15.673 Doorbell Stride: 4 bytes 00:15:15.673 NVM Subsystem Reset: Not Supported 00:15:15.673 Command Sets Supported 00:15:15.673 NVM Command Set: Supported 00:15:15.673 Boot Partition: Not Supported 00:15:15.673 Memory Page Size Minimum: 4096 bytes 00:15:15.673 Memory Page Size Maximum: 4096 bytes 00:15:15.673 Persistent Memory Region: Not Supported 00:15:15.673 Optional Asynchronous Events Supported 00:15:15.673 Namespace Attribute Notices: Supported 00:15:15.673 Firmware Activation Notices: Not Supported 00:15:15.673 ANA Change Notices: Not Supported 00:15:15.673 PLE Aggregate Log Change Notices: Not Supported 00:15:15.673 LBA Status Info Alert Notices: Not Supported 00:15:15.673 EGE Aggregate Log Change Notices: Not Supported 00:15:15.673 Normal NVM Subsystem Shutdown event: Not Supported 00:15:15.673 Zone Descriptor Change Notices: Not Supported 00:15:15.673 Discovery Log Change Notices: Not Supported 00:15:15.673 Controller Attributes 00:15:15.673 128-bit Host Identifier: Supported 00:15:15.673 Non-Operational Permissive Mode: Not Supported 00:15:15.673 NVM Sets: Not Supported 00:15:15.673 Read Recovery Levels: Not Supported 00:15:15.673 Endurance Groups: Not Supported 00:15:15.673 Predictable Latency Mode: Not Supported 00:15:15.673 Traffic Based Keep ALive: Not Supported 00:15:15.673 Namespace Granularity: Not 
Supported 00:15:15.673 SQ Associations: Not Supported 00:15:15.673 UUID List: Not Supported 00:15:15.673 Multi-Domain Subsystem: Not Supported 00:15:15.673 Fixed Capacity Management: Not Supported 00:15:15.673 Variable Capacity Management: Not Supported 00:15:15.673 Delete Endurance Group: Not Supported 00:15:15.673 Delete NVM Set: Not Supported 00:15:15.673 Extended LBA Formats Supported: Not Supported 00:15:15.673 Flexible Data Placement Supported: Not Supported 00:15:15.673 00:15:15.673 Controller Memory Buffer Support 00:15:15.673 ================================ 00:15:15.673 Supported: No 00:15:15.673 00:15:15.673 Persistent Memory Region Support 00:15:15.673 ================================ 00:15:15.673 Supported: No 00:15:15.673 00:15:15.673 Admin Command Set Attributes 00:15:15.673 ============================ 00:15:15.673 Security Send/Receive: Not Supported 00:15:15.673 Format NVM: Not Supported 00:15:15.673 Firmware Activate/Download: Not Supported 00:15:15.673 Namespace Management: Not Supported 00:15:15.673 Device Self-Test: Not Supported 00:15:15.673 Directives: Not Supported 00:15:15.673 NVMe-MI: Not Supported 00:15:15.673 Virtualization Management: Not Supported 00:15:15.673 Doorbell Buffer Config: Not Supported 00:15:15.673 Get LBA Status Capability: Not Supported 00:15:15.673 Command & Feature Lockdown Capability: Not Supported 00:15:15.673 Abort Command Limit: 4 00:15:15.673 Async Event Request Limit: 4 00:15:15.673 Number of Firmware Slots: N/A 00:15:15.673 Firmware Slot 1 Read-Only: N/A 00:15:15.673 Firmware Activation Without Reset: N/A 00:15:15.673 Multiple Update Detection Support: N/A 00:15:15.673 Firmware Update Granularity: No Information Provided 00:15:15.673 Per-Namespace SMART Log: No 00:15:15.673 Asymmetric Namespace Access Log Page: Not Supported 00:15:15.673 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:15.673 Command Effects Log Page: Supported 00:15:15.673 Get Log Page Extended Data: Supported 00:15:15.673 Telemetry Log Pages: Not Supported 00:15:15.673 Persistent Event Log Pages: Not Supported 00:15:15.673 Supported Log Pages Log Page: May Support 00:15:15.673 Commands Supported & Effects Log Page: Not Supported 00:15:15.673 Feature Identifiers & Effects Log Page:May Support 00:15:15.673 NVMe-MI Commands & Effects Log Page: May Support 00:15:15.673 Data Area 4 for Telemetry Log: Not Supported 00:15:15.673 Error Log Page Entries Supported: 128 00:15:15.673 Keep Alive: Supported 00:15:15.673 Keep Alive Granularity: 10000 ms 00:15:15.673 00:15:15.673 NVM Command Set Attributes 00:15:15.673 ========================== 00:15:15.673 Submission Queue Entry Size 00:15:15.673 Max: 64 00:15:15.673 Min: 64 00:15:15.673 Completion Queue Entry Size 00:15:15.674 Max: 16 00:15:15.674 Min: 16 00:15:15.674 Number of Namespaces: 32 00:15:15.674 Compare Command: Supported 00:15:15.674 Write Uncorrectable Command: Not Supported 00:15:15.674 Dataset Management Command: Supported 00:15:15.674 Write Zeroes Command: Supported 00:15:15.674 Set Features Save Field: Not Supported 00:15:15.674 Reservations: Not Supported 00:15:15.674 Timestamp: Not Supported 00:15:15.674 Copy: Supported 00:15:15.674 Volatile Write Cache: Present 00:15:15.674 Atomic Write Unit (Normal): 1 00:15:15.674 Atomic Write Unit (PFail): 1 00:15:15.674 Atomic Compare & Write Unit: 1 00:15:15.674 Fused Compare & Write: Supported 00:15:15.674 Scatter-Gather List 00:15:15.674 SGL Command Set: Supported (Dword aligned) 00:15:15.674 SGL Keyed: Not Supported 00:15:15.674 SGL Bit Bucket Descriptor: Not Supported 
00:15:15.674 SGL Metadata Pointer: Not Supported 00:15:15.674 Oversized SGL: Not Supported 00:15:15.674 SGL Metadata Address: Not Supported 00:15:15.674 SGL Offset: Not Supported 00:15:15.674 Transport SGL Data Block: Not Supported 00:15:15.674 Replay Protected Memory Block: Not Supported 00:15:15.674 00:15:15.674 Firmware Slot Information 00:15:15.674 ========================= 00:15:15.674 Active slot: 1 00:15:15.674 Slot 1 Firmware Revision: 24.05.1 00:15:15.674 00:15:15.674 00:15:15.674 Commands Supported and Effects 00:15:15.674 ============================== 00:15:15.674 Admin Commands 00:15:15.674 -------------- 00:15:15.674 Get Log Page (02h): Supported 00:15:15.674 Identify (06h): Supported 00:15:15.674 Abort (08h): Supported 00:15:15.674 Set Features (09h): Supported 00:15:15.674 Get Features (0Ah): Supported 00:15:15.674 Asynchronous Event Request (0Ch): Supported 00:15:15.674 Keep Alive (18h): Supported 00:15:15.674 I/O Commands 00:15:15.674 ------------ 00:15:15.674 Flush (00h): Supported LBA-Change 00:15:15.674 Write (01h): Supported LBA-Change 00:15:15.674 Read (02h): Supported 00:15:15.674 Compare (05h): Supported 00:15:15.674 Write Zeroes (08h): Supported LBA-Change 00:15:15.674 Dataset Management (09h): Supported LBA-Change 00:15:15.674 Copy (19h): Supported LBA-Change 00:15:15.674 Unknown (79h): Supported LBA-Change 00:15:15.674 Unknown (7Ah): Supported 00:15:15.674 00:15:15.674 Error Log 00:15:15.674 ========= 00:15:15.674 00:15:15.674 Arbitration 00:15:15.674 =========== 00:15:15.674 Arbitration Burst: 1 00:15:15.674 00:15:15.674 Power Management 00:15:15.674 ================ 00:15:15.674 Number of Power States: 1 00:15:15.674 Current Power State: Power State #0 00:15:15.674 Power State #0: 00:15:15.674 Max Power: 0.00 W 00:15:15.674 Non-Operational State: Operational 00:15:15.674 Entry Latency: Not Reported 00:15:15.674 Exit Latency: Not Reported 00:15:15.674 Relative Read Throughput: 0 00:15:15.674 Relative Read Latency: 0 00:15:15.674 Relative Write Throughput: 0 00:15:15.674 Relative Write Latency: 0 00:15:15.674 Idle Power: Not Reported 00:15:15.674 Active Power: Not Reported 00:15:15.674 Non-Operational Permissive Mode: Not Supported 00:15:15.674 00:15:15.674 Health Information 00:15:15.674 ================== 00:15:15.674 Critical Warnings: 00:15:15.674 Available Spare Space: OK 00:15:15.674 Temperature: OK 00:15:15.674 Device Reliability: OK 00:15:15.674 Read Only: No 00:15:15.674 Volatile Memory Backup: OK 00:15:15.674 Current Temperature: 0 Kelvin[2024-07-14 04:31:35.692076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:15.674 [2024-07-14 04:31:35.699881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:15.674 [2024-07-14 04:31:35.699924] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:15.674 [2024-07-14 04:31:35.699942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.674 [2024-07-14 04:31:35.699953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.674 [2024-07-14 04:31:35.699964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.674 [2024-07-14 
04:31:35.699979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.674 [2024-07-14 04:31:35.700067] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:15.674 [2024-07-14 04:31:35.700088] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:15.674 [2024-07-14 04:31:35.701065] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:15.674 [2024-07-14 04:31:35.701135] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:15.674 [2024-07-14 04:31:35.701150] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:15.674 [2024-07-14 04:31:35.702080] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:15.674 [2024-07-14 04:31:35.702105] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:15.674 [2024-07-14 04:31:35.702155] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:15.674 [2024-07-14 04:31:35.703383] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:15.674 (-273 Celsius) 00:15:15.674 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:15.674 Available Spare: 0% 00:15:15.674 Available Spare Threshold: 0% 00:15:15.674 Life Percentage Used: 0% 00:15:15.674 Data Units Read: 0 00:15:15.674 Data Units Written: 0 00:15:15.674 Host Read Commands: 0 00:15:15.674 Host Write Commands: 0 00:15:15.674 Controller Busy Time: 0 minutes 00:15:15.674 Power Cycles: 0 00:15:15.674 Power On Hours: 0 hours 00:15:15.674 Unsafe Shutdowns: 0 00:15:15.674 Unrecoverable Media Errors: 0 00:15:15.674 Lifetime Error Log Entries: 0 00:15:15.674 Warning Temperature Time: 0 minutes 00:15:15.674 Critical Temperature Time: 0 minutes 00:15:15.674 00:15:15.674 Number of Queues 00:15:15.674 ================ 00:15:15.674 Number of I/O Submission Queues: 127 00:15:15.674 Number of I/O Completion Queues: 127 00:15:15.674 00:15:15.674 Active Namespaces 00:15:15.674 ================= 00:15:15.674 Namespace ID:1 00:15:15.674 Error Recovery Timeout: Unlimited 00:15:15.674 Command Set Identifier: NVM (00h) 00:15:15.674 Deallocate: Supported 00:15:15.674 Deallocated/Unwritten Error: Not Supported 00:15:15.674 Deallocated Read Value: Unknown 00:15:15.674 Deallocate in Write Zeroes: Not Supported 00:15:15.674 Deallocated Guard Field: 0xFFFF 00:15:15.674 Flush: Supported 00:15:15.674 Reservation: Supported 00:15:15.674 Namespace Sharing Capabilities: Multiple Controllers 00:15:15.674 Size (in LBAs): 131072 (0GiB) 00:15:15.674 Capacity (in LBAs): 131072 (0GiB) 00:15:15.674 Utilization (in LBAs): 131072 (0GiB) 00:15:15.674 NGUID: 3E8E951425A74A478077D5D8BC590241 00:15:15.674 UUID: 3e8e9514-25a7-4a47-8077-d5d8bc590241 00:15:15.674 Thin Provisioning: Not Supported 00:15:15.674 Per-NS Atomic Units: Yes 00:15:15.674 Atomic Boundary Size (Normal): 0 00:15:15.674 Atomic Boundary Size (PFail): 0 00:15:15.674 Atomic Boundary Offset: 0 00:15:15.674 Maximum Single Source Range 
Length: 65535 00:15:15.674 Maximum Copy Length: 65535 00:15:15.674 Maximum Source Range Count: 1 00:15:15.674 NGUID/EUI64 Never Reused: No 00:15:15.674 Namespace Write Protected: No 00:15:15.674 Number of LBA Formats: 1 00:15:15.674 Current LBA Format: LBA Format #00 00:15:15.674 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:15.674 00:15:15.674 04:31:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:15.674 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.933 [2024-07-14 04:31:35.935051] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.212 Initializing NVMe Controllers 00:15:21.212 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:21.212 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:21.212 Initialization complete. Launching workers. 00:15:21.212 ======================================================== 00:15:21.212 Latency(us) 00:15:21.212 Device Information : IOPS MiB/s Average min max 00:15:21.212 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35921.72 140.32 3562.89 1164.15 7269.56 00:15:21.212 ======================================================== 00:15:21.212 Total : 35921.72 140.32 3562.89 1164.15 7269.56 00:15:21.212 00:15:21.212 [2024-07-14 04:31:41.044256] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:21.212 04:31:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:21.212 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.212 [2024-07-14 04:31:41.286939] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:26.483 Initializing NVMe Controllers 00:15:26.483 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:26.483 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:26.483 Initialization complete. Launching workers. 
00:15:26.483 ======================================================== 00:15:26.483 Latency(us) 00:15:26.483 Device Information : IOPS MiB/s Average min max 00:15:26.483 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34015.05 132.87 3762.44 1165.31 7457.63 00:15:26.483 ======================================================== 00:15:26.483 Total : 34015.05 132.87 3762.44 1165.31 7457.63 00:15:26.483 00:15:26.483 [2024-07-14 04:31:46.310930] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:26.483 04:31:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:26.483 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.483 [2024-07-14 04:31:46.520769] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:31.778 [2024-07-14 04:31:51.657026] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:31.778 Initializing NVMe Controllers 00:15:31.778 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:31.778 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:31.778 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:31.778 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:31.778 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:31.778 Initialization complete. Launching workers. 00:15:31.778 Starting thread on core 2 00:15:31.778 Starting thread on core 3 00:15:31.778 Starting thread on core 1 00:15:31.778 04:31:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:31.778 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.778 [2024-07-14 04:31:51.962705] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:35.066 [2024-07-14 04:31:55.049779] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:35.066 Initializing NVMe Controllers 00:15:35.066 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.066 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.066 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:35.066 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:35.066 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:35.066 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:35.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:35.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:35.066 Initialization complete. Launching workers. 
00:15:35.066 Starting thread on core 1 with urgent priority queue 00:15:35.066 Starting thread on core 2 with urgent priority queue 00:15:35.066 Starting thread on core 3 with urgent priority queue 00:15:35.066 Starting thread on core 0 with urgent priority queue 00:15:35.066 SPDK bdev Controller (SPDK2 ) core 0: 5305.33 IO/s 18.85 secs/100000 ios 00:15:35.066 SPDK bdev Controller (SPDK2 ) core 1: 6002.33 IO/s 16.66 secs/100000 ios 00:15:35.066 SPDK bdev Controller (SPDK2 ) core 2: 4322.00 IO/s 23.14 secs/100000 ios 00:15:35.066 SPDK bdev Controller (SPDK2 ) core 3: 5956.00 IO/s 16.79 secs/100000 ios 00:15:35.066 ======================================================== 00:15:35.066 00:15:35.066 04:31:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:35.066 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.324 [2024-07-14 04:31:55.350360] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:35.324 Initializing NVMe Controllers 00:15:35.324 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.324 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.324 Namespace ID: 1 size: 0GB 00:15:35.324 Initialization complete. 00:15:35.324 INFO: using host memory buffer for IO 00:15:35.324 Hello world! 00:15:35.324 [2024-07-14 04:31:55.359414] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:35.324 04:31:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:35.324 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.582 [2024-07-14 04:31:55.635268] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:36.964 Initializing NVMe Controllers 00:15:36.964 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:36.964 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:36.964 Initialization complete. Launching workers. 
00:15:36.964 submit (in ns) avg, min, max = 7580.9, 3537.8, 6994258.9 00:15:36.964 complete (in ns) avg, min, max = 27498.4, 2064.4, 6993171.1 00:15:36.964 00:15:36.964 Submit histogram 00:15:36.964 ================ 00:15:36.964 Range in us Cumulative Count 00:15:36.964 3.532 - 3.556: 0.1208% ( 16) 00:15:36.964 3.556 - 3.579: 0.7245% ( 80) 00:15:36.964 3.579 - 3.603: 2.4981% ( 235) 00:15:36.964 3.603 - 3.627: 5.6528% ( 418) 00:15:36.964 3.627 - 3.650: 11.8868% ( 826) 00:15:36.964 3.650 - 3.674: 19.1019% ( 956) 00:15:36.964 3.674 - 3.698: 28.0906% ( 1191) 00:15:36.964 3.698 - 3.721: 36.6113% ( 1129) 00:15:36.964 3.721 - 3.745: 44.4830% ( 1043) 00:15:36.964 3.745 - 3.769: 50.5585% ( 805) 00:15:36.964 3.769 - 3.793: 55.5019% ( 655) 00:15:36.964 3.793 - 3.816: 59.9623% ( 591) 00:15:36.964 3.816 - 3.840: 63.1245% ( 419) 00:15:36.964 3.840 - 3.864: 67.0868% ( 525) 00:15:36.964 3.864 - 3.887: 70.2717% ( 422) 00:15:36.964 3.887 - 3.911: 74.0151% ( 496) 00:15:36.964 3.911 - 3.935: 78.5358% ( 599) 00:15:36.964 3.935 - 3.959: 82.2038% ( 486) 00:15:36.964 3.959 - 3.982: 85.2604% ( 405) 00:15:36.964 3.982 - 4.006: 87.2755% ( 267) 00:15:36.964 4.006 - 4.030: 88.8226% ( 205) 00:15:36.964 4.030 - 4.053: 90.2792% ( 193) 00:15:36.964 4.053 - 4.077: 91.4792% ( 159) 00:15:36.964 4.077 - 4.101: 92.3396% ( 114) 00:15:36.964 4.101 - 4.124: 93.2528% ( 121) 00:15:36.964 4.124 - 4.148: 94.1208% ( 115) 00:15:36.964 4.148 - 4.172: 94.7245% ( 80) 00:15:36.964 4.172 - 4.196: 95.1774% ( 60) 00:15:36.964 4.196 - 4.219: 95.5094% ( 44) 00:15:36.964 4.219 - 4.243: 95.7509% ( 32) 00:15:36.964 4.243 - 4.267: 95.9245% ( 23) 00:15:36.964 4.267 - 4.290: 96.1434% ( 29) 00:15:36.964 4.290 - 4.314: 96.2717% ( 17) 00:15:36.965 4.314 - 4.338: 96.4226% ( 20) 00:15:36.965 4.338 - 4.361: 96.5283% ( 14) 00:15:36.965 4.361 - 4.385: 96.5962% ( 9) 00:15:36.965 4.385 - 4.409: 96.6491% ( 7) 00:15:36.965 4.409 - 4.433: 96.6943% ( 6) 00:15:36.965 4.433 - 4.456: 96.7396% ( 6) 00:15:36.965 4.456 - 4.480: 96.7623% ( 3) 00:15:36.965 4.480 - 4.504: 96.7849% ( 3) 00:15:36.965 4.504 - 4.527: 96.8000% ( 2) 00:15:36.965 4.527 - 4.551: 96.8075% ( 1) 00:15:36.965 4.575 - 4.599: 96.8302% ( 3) 00:15:36.965 4.599 - 4.622: 96.8528% ( 3) 00:15:36.965 4.646 - 4.670: 96.8604% ( 1) 00:15:36.965 4.693 - 4.717: 96.8679% ( 1) 00:15:36.965 4.717 - 4.741: 96.8830% ( 2) 00:15:36.965 4.741 - 4.764: 96.8981% ( 2) 00:15:36.965 4.764 - 4.788: 96.9208% ( 3) 00:15:36.965 4.788 - 4.812: 96.9585% ( 5) 00:15:36.965 4.812 - 4.836: 97.0038% ( 6) 00:15:36.965 4.836 - 4.859: 97.0189% ( 2) 00:15:36.965 4.859 - 4.883: 97.0792% ( 8) 00:15:36.965 4.883 - 4.907: 97.1321% ( 7) 00:15:36.965 4.907 - 4.930: 97.1698% ( 5) 00:15:36.965 4.930 - 4.954: 97.2302% ( 8) 00:15:36.965 4.954 - 4.978: 97.2981% ( 9) 00:15:36.965 4.978 - 5.001: 97.3585% ( 8) 00:15:36.965 5.001 - 5.025: 97.4264% ( 9) 00:15:36.965 5.025 - 5.049: 97.4491% ( 3) 00:15:36.965 5.049 - 5.073: 97.4943% ( 6) 00:15:36.965 5.096 - 5.120: 97.5396% ( 6) 00:15:36.965 5.120 - 5.144: 97.6226% ( 11) 00:15:36.965 5.144 - 5.167: 97.6528% ( 4) 00:15:36.965 5.167 - 5.191: 97.6906% ( 5) 00:15:36.965 5.191 - 5.215: 97.7208% ( 4) 00:15:36.965 5.215 - 5.239: 97.7358% ( 2) 00:15:36.965 5.239 - 5.262: 97.7585% ( 3) 00:15:36.965 5.262 - 5.286: 97.7736% ( 2) 00:15:36.965 5.310 - 5.333: 97.7887% ( 2) 00:15:36.965 5.333 - 5.357: 97.8038% ( 2) 00:15:36.965 5.357 - 5.381: 97.8113% ( 1) 00:15:36.965 5.381 - 5.404: 97.8415% ( 4) 00:15:36.965 5.404 - 5.428: 97.8717% ( 4) 00:15:36.965 5.428 - 5.452: 97.8868% ( 2) 00:15:36.965 5.499 - 5.523: 97.9170% ( 4) 
00:15:36.965 5.547 - 5.570: 97.9245% ( 1) 00:15:36.965 5.570 - 5.594: 97.9321% ( 1) 00:15:36.965 5.594 - 5.618: 97.9396% ( 1) 00:15:36.965 5.618 - 5.641: 97.9698% ( 4) 00:15:36.965 5.641 - 5.665: 97.9925% ( 3) 00:15:36.965 5.665 - 5.689: 98.0000% ( 1) 00:15:36.965 5.689 - 5.713: 98.0075% ( 1) 00:15:36.965 5.713 - 5.736: 98.0151% ( 1) 00:15:36.965 5.736 - 5.760: 98.0226% ( 1) 00:15:36.965 5.760 - 5.784: 98.0377% ( 2) 00:15:36.965 5.784 - 5.807: 98.0528% ( 2) 00:15:36.965 5.807 - 5.831: 98.0679% ( 2) 00:15:36.965 5.855 - 5.879: 98.0830% ( 2) 00:15:36.965 5.879 - 5.902: 98.0981% ( 2) 00:15:36.965 5.902 - 5.926: 98.1057% ( 1) 00:15:36.965 5.926 - 5.950: 98.1132% ( 1) 00:15:36.965 5.973 - 5.997: 98.1283% ( 2) 00:15:36.965 5.997 - 6.021: 98.1434% ( 2) 00:15:36.965 6.021 - 6.044: 98.1509% ( 1) 00:15:36.965 6.068 - 6.116: 98.1736% ( 3) 00:15:36.965 6.116 - 6.163: 98.1887% ( 2) 00:15:36.965 6.210 - 6.258: 98.1962% ( 1) 00:15:36.965 6.258 - 6.305: 98.2189% ( 3) 00:15:36.965 6.305 - 6.353: 98.2340% ( 2) 00:15:36.965 6.353 - 6.400: 98.2415% ( 1) 00:15:36.965 6.495 - 6.542: 98.2491% ( 1) 00:15:36.965 6.684 - 6.732: 98.2566% ( 1) 00:15:36.965 6.874 - 6.921: 98.2642% ( 1) 00:15:36.965 7.159 - 7.206: 98.2792% ( 2) 00:15:36.965 7.253 - 7.301: 98.2868% ( 1) 00:15:36.965 7.348 - 7.396: 98.2943% ( 1) 00:15:36.965 7.396 - 7.443: 98.3094% ( 2) 00:15:36.965 7.443 - 7.490: 98.3245% ( 2) 00:15:36.965 7.490 - 7.538: 98.3396% ( 2) 00:15:36.965 7.538 - 7.585: 98.3472% ( 1) 00:15:36.965 7.585 - 7.633: 98.3547% ( 1) 00:15:36.965 7.633 - 7.680: 98.3698% ( 2) 00:15:36.965 7.680 - 7.727: 98.3849% ( 2) 00:15:36.965 7.775 - 7.822: 98.4000% ( 2) 00:15:36.965 7.870 - 7.917: 98.4075% ( 1) 00:15:36.965 7.917 - 7.964: 98.4151% ( 1) 00:15:36.965 7.964 - 8.012: 98.4226% ( 1) 00:15:36.965 8.059 - 8.107: 98.4377% ( 2) 00:15:36.965 8.107 - 8.154: 98.4453% ( 1) 00:15:36.965 8.154 - 8.201: 98.4528% ( 1) 00:15:36.965 8.201 - 8.249: 98.4604% ( 1) 00:15:36.965 8.344 - 8.391: 98.4679% ( 1) 00:15:36.965 8.391 - 8.439: 98.4830% ( 2) 00:15:36.965 8.439 - 8.486: 98.4906% ( 1) 00:15:36.965 8.486 - 8.533: 98.5057% ( 2) 00:15:36.965 8.533 - 8.581: 98.5208% ( 2) 00:15:36.965 8.581 - 8.628: 98.5358% ( 2) 00:15:36.965 8.628 - 8.676: 98.5509% ( 2) 00:15:36.965 8.676 - 8.723: 98.5585% ( 1) 00:15:36.965 9.007 - 9.055: 98.5660% ( 1) 00:15:36.965 9.055 - 9.102: 98.5736% ( 1) 00:15:36.965 9.102 - 9.150: 98.5811% ( 1) 00:15:36.965 9.197 - 9.244: 98.5887% ( 1) 00:15:36.965 9.244 - 9.292: 98.5962% ( 1) 00:15:36.965 9.481 - 9.529: 98.6038% ( 1) 00:15:36.965 9.576 - 9.624: 98.6113% ( 1) 00:15:36.965 9.624 - 9.671: 98.6189% ( 1) 00:15:36.965 9.671 - 9.719: 98.6264% ( 1) 00:15:36.965 9.766 - 9.813: 98.6340% ( 1) 00:15:36.965 9.861 - 9.908: 98.6415% ( 1) 00:15:36.965 9.956 - 10.003: 98.6491% ( 1) 00:15:36.965 10.098 - 10.145: 98.6566% ( 1) 00:15:36.965 10.145 - 10.193: 98.6717% ( 2) 00:15:36.965 10.430 - 10.477: 98.6792% ( 1) 00:15:36.965 10.477 - 10.524: 98.6868% ( 1) 00:15:36.965 10.809 - 10.856: 98.6943% ( 1) 00:15:36.965 10.856 - 10.904: 98.7019% ( 1) 00:15:36.965 10.904 - 10.951: 98.7170% ( 2) 00:15:36.965 10.951 - 10.999: 98.7245% ( 1) 00:15:36.965 10.999 - 11.046: 98.7321% ( 1) 00:15:36.965 11.046 - 11.093: 98.7396% ( 1) 00:15:36.965 11.093 - 11.141: 98.7472% ( 1) 00:15:36.965 11.188 - 11.236: 98.7547% ( 1) 00:15:36.965 11.425 - 11.473: 98.7623% ( 1) 00:15:36.965 11.520 - 11.567: 98.7774% ( 2) 00:15:36.965 11.947 - 11.994: 98.7849% ( 1) 00:15:36.965 12.041 - 12.089: 98.8000% ( 2) 00:15:36.965 12.089 - 12.136: 98.8075% ( 1) 00:15:36.965 12.136 - 12.231: 
98.8151% ( 1) 00:15:36.965 12.231 - 12.326: 98.8226% ( 1) 00:15:36.965 12.421 - 12.516: 98.8302% ( 1) 00:15:36.965 12.516 - 12.610: 98.8453% ( 2) 00:15:36.965 12.990 - 13.084: 98.8679% ( 3) 00:15:36.965 13.084 - 13.179: 98.8755% ( 1) 00:15:36.965 13.179 - 13.274: 98.8830% ( 1) 00:15:36.965 13.274 - 13.369: 98.8981% ( 2) 00:15:36.965 13.559 - 13.653: 98.9057% ( 1) 00:15:36.965 13.843 - 13.938: 98.9132% ( 1) 00:15:36.965 13.938 - 14.033: 98.9208% ( 1) 00:15:36.965 14.127 - 14.222: 98.9283% ( 1) 00:15:36.965 14.317 - 14.412: 98.9358% ( 1) 00:15:36.965 14.601 - 14.696: 98.9434% ( 1) 00:15:36.965 14.696 - 14.791: 98.9509% ( 1) 00:15:36.965 15.360 - 15.455: 98.9585% ( 1) 00:15:36.965 17.067 - 17.161: 98.9660% ( 1) 00:15:36.965 17.161 - 17.256: 98.9811% ( 2) 00:15:36.965 17.256 - 17.351: 99.0189% ( 5) 00:15:36.965 17.446 - 17.541: 99.0566% ( 5) 00:15:36.965 17.541 - 17.636: 99.0792% ( 3) 00:15:36.965 17.636 - 17.730: 99.1094% ( 4) 00:15:36.965 17.730 - 17.825: 99.1396% ( 4) 00:15:36.965 17.825 - 17.920: 99.1849% ( 6) 00:15:36.965 17.920 - 18.015: 99.2604% ( 10) 00:15:36.965 18.015 - 18.110: 99.3057% ( 6) 00:15:36.965 18.110 - 18.204: 99.3434% ( 5) 00:15:36.965 18.204 - 18.299: 99.3811% ( 5) 00:15:36.965 18.299 - 18.394: 99.4491% ( 9) 00:15:36.965 18.394 - 18.489: 99.5321% ( 11) 00:15:36.965 18.489 - 18.584: 99.6000% ( 9) 00:15:36.965 18.584 - 18.679: 99.6830% ( 11) 00:15:36.965 18.679 - 18.773: 99.7208% ( 5) 00:15:36.965 18.773 - 18.868: 99.7509% ( 4) 00:15:36.965 18.868 - 18.963: 99.7811% ( 4) 00:15:36.965 18.963 - 19.058: 99.7962% ( 2) 00:15:36.965 19.058 - 19.153: 99.8189% ( 3) 00:15:36.965 19.153 - 19.247: 99.8264% ( 1) 00:15:36.965 19.342 - 19.437: 99.8340% ( 1) 00:15:36.965 19.627 - 19.721: 99.8491% ( 2) 00:15:36.965 19.816 - 19.911: 99.8642% ( 2) 00:15:36.965 19.911 - 20.006: 99.8717% ( 1) 00:15:36.965 21.807 - 21.902: 99.8792% ( 1) 00:15:36.965 22.945 - 23.040: 99.8868% ( 1) 00:15:36.965 23.135 - 23.230: 99.8943% ( 1) 00:15:36.965 23.893 - 23.988: 99.9019% ( 1) 00:15:36.965 27.496 - 27.686: 99.9094% ( 1) 00:15:36.965 385.327 - 386.844: 99.9170% ( 1) 00:15:36.965 3980.705 - 4004.978: 99.9774% ( 8) 00:15:36.965 4004.978 - 4029.250: 99.9925% ( 2) 00:15:36.965 6990.507 - 7039.052: 100.0000% ( 1) 00:15:36.965 00:15:36.965 Complete histogram 00:15:36.965 ================== 00:15:36.965 Range in us Cumulative Count 00:15:36.965 2.062 - 2.074: 4.1434% ( 549) 00:15:36.965 2.074 - 2.086: 31.8189% ( 3667) 00:15:36.965 2.086 - 2.098: 35.6830% ( 512) 00:15:36.965 2.098 - 2.110: 43.4792% ( 1033) 00:15:36.965 2.110 - 2.121: 56.3396% ( 1704) 00:15:36.965 2.121 - 2.133: 58.3774% ( 270) 00:15:36.965 2.133 - 2.145: 65.1849% ( 902) 00:15:36.965 2.145 - 2.157: 73.0038% ( 1036) 00:15:36.965 2.157 - 2.169: 74.4075% ( 186) 00:15:36.965 2.169 - 2.181: 78.3925% ( 528) 00:15:36.965 2.181 - 2.193: 81.9170% ( 467) 00:15:36.965 2.193 - 2.204: 82.5736% ( 87) 00:15:36.965 2.204 - 2.216: 84.7698% ( 291) 00:15:36.965 2.216 - 2.228: 87.9925% ( 427) 00:15:36.965 2.228 - 2.240: 90.1585% ( 287) 00:15:36.965 2.240 - 2.252: 91.6377% ( 196) 00:15:36.965 2.252 - 2.264: 93.2453% ( 213) 00:15:36.965 2.264 - 2.276: 93.6528% ( 54) 00:15:36.965 2.276 - 2.287: 94.0679% ( 55) 00:15:36.965 2.287 - 2.299: 94.4075% ( 45) 00:15:36.966 2.299 - 2.311: 95.0491% ( 85) 00:15:36.966 2.311 - 2.323: 95.3811% ( 44) 00:15:36.966 2.323 - 2.335: 95.4566% ( 10) 00:15:36.966 2.335 - 2.347: 95.5094% ( 7) 00:15:36.966 2.347 - 2.359: 95.5321% ( 3) 00:15:36.966 2.359 - 2.370: 95.6453% ( 15) 00:15:36.966 2.370 - 2.382: 95.8943% ( 33) 00:15:36.966 2.382 - 
2.394: 96.2415% ( 46) 00:15:36.966 2.394 - 2.406: 96.5962% ( 47) 00:15:36.966 2.406 - 2.418: 96.7094% ( 15) 00:15:36.966 2.418 - 2.430: 96.9132% ( 27) 00:15:36.966 2.430 - 2.441: 97.0415% ( 17) 00:15:36.966 2.441 - 2.453: 97.1774% ( 18) 00:15:36.966 2.453 - 2.465: 97.3434% ( 22) 00:15:36.966 2.465 - 2.477: 97.5170% ( 23) 00:15:36.966 2.477 - 2.489: 97.6151% ( 13) 00:15:36.966 2.489 - 2.501: 97.6830% ( 9) 00:15:36.966 2.501 - 2.513: 97.7585% ( 10) 00:15:36.966 2.513 - 2.524: 97.8491% ( 12) 00:15:36.966 2.524 - 2.536: 97.9094% ( 8) 00:15:36.966 2.536 - 2.548: 97.9698% ( 8) 00:15:36.966 2.548 - 2.560: 98.0075% ( 5) 00:15:36.966 2.560 - 2.572: 98.0302% ( 3) 00:15:36.966 2.572 - 2.584: 98.0528% ( 3) 00:15:36.966 2.584 - 2.596: 98.0604% ( 1) 00:15:36.966 2.596 - 2.607: 98.0679% ( 1) 00:15:36.966 2.607 - 2.619: 98.1057% ( 5) 00:15:36.966 2.619 - 2.631: 98.1132% ( 1) 00:15:36.966 2.631 - 2.643: 98.1283% ( 2) 00:15:36.966 2.655 - 2.667: 98.1358% ( 1) 00:15:36.966 2.702 - 2.714: 98.1434% ( 1) 00:15:36.966 2.714 - 2.726: 98.1660% ( 3) 00:15:36.966 2.726 - 2.738: 98.1811% ( 2) 00:15:36.966 2.738 - 2.750: 98.1962% ( 2) 00:15:36.966 2.750 - 2.761: 98.2038% ( 1) 00:15:36.966 2.773 - 2.785: 98.2113% ( 1) 00:15:36.966 2.797 - 2.809: 98.2189% ( 1) 00:15:36.966 2.821 - 2.833: 98.2264% ( 1) 00:15:36.966 2.833 - 2.844: 98.2415% ( 2) 00:15:36.966 2.916 - 2.927: 98.2491% ( 1) 00:15:36.966 2.927 - 2.939: 98.2642% ( 2) 00:15:36.966 2.939 - 2.951: 98.2717% ( 1) 00:15:36.966 2.951 - 2.963: 98.2792% ( 1) 00:15:36.966 2.963 - 2.975: 98.2868% ( 1) 00:15:36.966 2.999 - 3.010: 98.3019% ( 2) 00:15:36.966 3.022 - 3.034: 98.3094% ( 1) 00:15:36.966 3.034 - 3.058: 98.3170% ( 1) 00:15:36.966 3.058 - 3.081: 98.3321% ( 2) 00:15:36.966 3.105 - 3.129: 98.3396% ( 1) 00:15:36.966 3.153 - 3.176: 98.3547% ( 2) 00:15:36.966 3.176 - 3.200: 98.3623% ( 1) 00:15:36.966 3.247 - 3.271: 98.3698% ( 1) 00:15:36.966 3.271 - 3.295: 98.3774% ( 1) 00:15:36.966 3.319 - 3.342: 98.3849% ( 1) 00:15:36.966 3.342 - 3.366: 98.4000% ( 2) 00:15:36.966 3.366 - 3.390: 98.4226% ( 3) 00:15:36.966 3.390 - 3.413: 98.4453% ( 3) 00:15:36.966 3.413 - 3.437: 98.4679% ( 3) 00:15:36.966 3.461 - 3.484: 98.4830% ( 2) 00:15:36.966 3.484 - 3.508: 98.4906% ( 1) 00:15:36.966 3.508 - 3.532: 98.5057% ( 2) 00:15:36.966 3.532 - 3.556: 98.5208% ( 2) 00:15:36.966 3.556 - 3.579: 98.5509% ( 4) 00:15:36.966 3.579 - 3.603: 98.5660% ( 2) 00:15:36.966 3.603 - 3.627: 98.5811% ( 2) 00:15:36.966 3.627 - 3.650: 98.5887% ( 1) 00:15:36.966 3.650 - 3.674: 98.6189% ( 4) 00:15:36.966 3.698 - 3.721: 98.6340% ( 2) 00:15:36.966 3.721 - 3.745: 98.6415% ( 1) 00:15:36.966 3.745 - 3.769: 98.6491% ( 1) 00:15:36.966 3.769 - 3.793: 98.6566% ( 1) 00:15:36.966 3.793 - 3.816: 98.6717% ( 2) 00:15:36.966 3.816 - 3.840: 98.6868% ( 2) 00:15:36.966 3.840 - 3.864: 98.7170% ( 4) 00:15:36.966 3.911 - 3.935: 98.7245% ( 1) 00:15:36.966 3.935 - 3.959: 98.7321% ( 1) 00:15:36.966 3.982 - 4.006: 98.7472% ( 2) 00:15:36.966 4.006 - 4.030: 98.7547% ( 1) 00:15:36.966 4.077 - 4.101: 98.7623% ( 1) 00:15:36.966 4.101 - 4.124: 98.7698% ( 1) 00:15:36.966 4.124 - 4.148: 98.7774% ( 1) 00:15:36.966 5.120 - 5.144: 98.7849% ( 1) 00:15:36.966 5.499 - 5.523: 98.8000% ( 2) 00:15:36.966 5.879 - 5.902: 98.8075% ( 1) 00:15:36.966 6.258 - 6.305: 98.8151% ( 1) 00:15:36.966 6.542 - 6.590: 98.8226% ( 1) 00:15:36.966 6.590 - 6.637: 98.8377% ( 2) 00:15:36.966 6.779 - 6.827: 98.8453% ( 1) 00:15:36.966 6.827 - 6.874: 98.8528% ( 1) 00:15:36.966 6.874 - 6.921: 98.8604% ( 1) 00:15:36.966 6.969 - 7.016: 98.8755% ( 2) 00:15:36.966 7.206 - 7.253: 98.8830% 
( 1) 00:15:36.966 7.443 - 7.490: 98.8906% ( 1) 00:15:36.966 7.585 - 7.633: 98.8981% ( 1) 00:15:36.966 8.201 - 8.249: 98.9057% ( 1) 00:15:36.966 9.102 - 9.150: 98.9132% ( 1) 00:15:36.966 9.813 - 9.861: 98.9208% ( 1) 00:15:36.966 10.335 - 10.382: 98.9283% ( 1) 00:15:36.966 11.947 - 11.994: 98.9358% ( 1) 00:15:36.966 12.231 - 12.326: 98.9434% ( 1) 00:15:36.966 12.326 - 12.421: 98.9509% ( 1) 00:15:36.966 15.550 - 15.644: 98.9585% ( 1) 00:15:36.966 15.644 - 15.739: 98.9887% ( 4) 00:15:36.966 15.739 - 15.834: 99.0113% ( 3) 00:15:36.966 15.834 - 15.929: 99.0264% ( 2) 00:15:36.966 15.929 - 16.024: 99.0491% ( 3) 00:15:36.966 16.024 - 16.119: 99.0566% ( 1) 00:15:36.966 16.119 - 16.213: 99.0642%[2024-07-14 04:31:56.739715] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:36.966 ( 1) 00:15:36.966 16.213 - 16.308: 99.0792% ( 2) 00:15:36.966 16.308 - 16.403: 99.1094% ( 4) 00:15:36.966 16.403 - 16.498: 99.1321% ( 3) 00:15:36.966 16.498 - 16.593: 99.1623% ( 4) 00:15:36.966 16.687 - 16.782: 99.1774% ( 2) 00:15:36.966 16.782 - 16.877: 99.2000% ( 3) 00:15:36.966 16.877 - 16.972: 99.2075% ( 1) 00:15:36.966 16.972 - 17.067: 99.2377% ( 4) 00:15:36.966 17.067 - 17.161: 99.2604% ( 3) 00:15:36.966 17.161 - 17.256: 99.2906% ( 4) 00:15:36.966 17.256 - 17.351: 99.3057% ( 2) 00:15:36.966 17.351 - 17.446: 99.3132% ( 1) 00:15:36.966 17.541 - 17.636: 99.3208% ( 1) 00:15:36.966 17.636 - 17.730: 99.3283% ( 1) 00:15:36.966 17.730 - 17.825: 99.3434% ( 2) 00:15:36.966 18.394 - 18.489: 99.3509% ( 1) 00:15:36.966 18.584 - 18.679: 99.3585% ( 1) 00:15:36.966 19.721 - 19.816: 99.3660% ( 1) 00:15:36.966 1031.585 - 1037.653: 99.3736% ( 1) 00:15:36.966 1601.991 - 1614.127: 99.3811% ( 1) 00:15:36.966 3228.255 - 3252.527: 99.3887% ( 1) 00:15:36.966 3980.705 - 4004.978: 99.8566% ( 62) 00:15:36.966 4004.978 - 4029.250: 99.9849% ( 17) 00:15:36.966 5995.330 - 6019.603: 99.9925% ( 1) 00:15:36.966 6990.507 - 7039.052: 100.0000% ( 1) 00:15:36.966 00:15:36.966 04:31:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:36.966 04:31:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:36.966 04:31:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:36.966 04:31:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:36.966 04:31:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:36.966 [ 00:15:36.966 { 00:15:36.966 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:36.966 "subtype": "Discovery", 00:15:36.966 "listen_addresses": [], 00:15:36.966 "allow_any_host": true, 00:15:36.966 "hosts": [] 00:15:36.966 }, 00:15:36.966 { 00:15:36.966 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:36.966 "subtype": "NVMe", 00:15:36.966 "listen_addresses": [ 00:15:36.966 { 00:15:36.966 "trtype": "VFIOUSER", 00:15:36.966 "adrfam": "IPv4", 00:15:36.966 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:36.966 "trsvcid": "0" 00:15:36.966 } 00:15:36.966 ], 00:15:36.966 "allow_any_host": true, 00:15:36.966 "hosts": [], 00:15:36.966 "serial_number": "SPDK1", 00:15:36.966 "model_number": "SPDK bdev Controller", 00:15:36.966 "max_namespaces": 32, 00:15:36.966 "min_cntlid": 1, 00:15:36.966 "max_cntlid": 65519, 00:15:36.966 "namespaces": [ 
00:15:36.966 { 00:15:36.966 "nsid": 1, 00:15:36.966 "bdev_name": "Malloc1", 00:15:36.966 "name": "Malloc1", 00:15:36.966 "nguid": "1FED7E75FEE94DAAAA4C829597A55C96", 00:15:36.966 "uuid": "1fed7e75-fee9-4daa-aa4c-829597a55c96" 00:15:36.966 }, 00:15:36.966 { 00:15:36.966 "nsid": 2, 00:15:36.966 "bdev_name": "Malloc3", 00:15:36.966 "name": "Malloc3", 00:15:36.966 "nguid": "58BC7FD6775D458294FF1DA35DE6986E", 00:15:36.966 "uuid": "58bc7fd6-775d-4582-94ff-1da35de6986e" 00:15:36.966 } 00:15:36.966 ] 00:15:36.966 }, 00:15:36.966 { 00:15:36.966 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:36.966 "subtype": "NVMe", 00:15:36.966 "listen_addresses": [ 00:15:36.966 { 00:15:36.966 "trtype": "VFIOUSER", 00:15:36.966 "adrfam": "IPv4", 00:15:36.966 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:36.966 "trsvcid": "0" 00:15:36.966 } 00:15:36.966 ], 00:15:36.966 "allow_any_host": true, 00:15:36.966 "hosts": [], 00:15:36.966 "serial_number": "SPDK2", 00:15:36.966 "model_number": "SPDK bdev Controller", 00:15:36.966 "max_namespaces": 32, 00:15:36.966 "min_cntlid": 1, 00:15:36.966 "max_cntlid": 65519, 00:15:36.966 "namespaces": [ 00:15:36.966 { 00:15:36.966 "nsid": 1, 00:15:36.966 "bdev_name": "Malloc2", 00:15:36.966 "name": "Malloc2", 00:15:36.966 "nguid": "3E8E951425A74A478077D5D8BC590241", 00:15:36.966 "uuid": "3e8e9514-25a7-4a47-8077-d5d8bc590241" 00:15:36.966 } 00:15:36.966 ] 00:15:36.966 } 00:15:36.966 ] 00:15:36.966 04:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:36.967 04:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2757981 00:15:36.967 04:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:36.967 04:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:36.967 04:31:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:36.967 04:31:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:36.967 04:31:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:36.967 04:31:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:36.967 04:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:36.967 04:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:36.967 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.225 [2024-07-14 04:31:57.182447] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:37.225 Malloc4 00:15:37.225 04:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:37.483 [2024-07-14 04:31:57.545145] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:37.483 04:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:37.483 Asynchronous Event Request test 00:15:37.483 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:37.483 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:37.483 Registering asynchronous event callbacks... 00:15:37.483 Starting namespace attribute notice tests for all controllers... 00:15:37.483 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:37.483 aer_cb - Changed Namespace 00:15:37.484 Cleaning up... 00:15:37.744 [ 00:15:37.744 { 00:15:37.744 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:37.744 "subtype": "Discovery", 00:15:37.744 "listen_addresses": [], 00:15:37.744 "allow_any_host": true, 00:15:37.744 "hosts": [] 00:15:37.744 }, 00:15:37.744 { 00:15:37.744 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:37.744 "subtype": "NVMe", 00:15:37.744 "listen_addresses": [ 00:15:37.744 { 00:15:37.744 "trtype": "VFIOUSER", 00:15:37.744 "adrfam": "IPv4", 00:15:37.744 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:37.744 "trsvcid": "0" 00:15:37.744 } 00:15:37.744 ], 00:15:37.744 "allow_any_host": true, 00:15:37.744 "hosts": [], 00:15:37.744 "serial_number": "SPDK1", 00:15:37.744 "model_number": "SPDK bdev Controller", 00:15:37.744 "max_namespaces": 32, 00:15:37.744 "min_cntlid": 1, 00:15:37.744 "max_cntlid": 65519, 00:15:37.744 "namespaces": [ 00:15:37.744 { 00:15:37.744 "nsid": 1, 00:15:37.744 "bdev_name": "Malloc1", 00:15:37.744 "name": "Malloc1", 00:15:37.744 "nguid": "1FED7E75FEE94DAAAA4C829597A55C96", 00:15:37.744 "uuid": "1fed7e75-fee9-4daa-aa4c-829597a55c96" 00:15:37.744 }, 00:15:37.744 { 00:15:37.744 "nsid": 2, 00:15:37.744 "bdev_name": "Malloc3", 00:15:37.744 "name": "Malloc3", 00:15:37.744 "nguid": "58BC7FD6775D458294FF1DA35DE6986E", 00:15:37.744 "uuid": "58bc7fd6-775d-4582-94ff-1da35de6986e" 00:15:37.744 } 00:15:37.744 ] 00:15:37.744 }, 00:15:37.744 { 00:15:37.744 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:37.744 "subtype": "NVMe", 00:15:37.744 "listen_addresses": [ 00:15:37.744 { 00:15:37.744 "trtype": "VFIOUSER", 00:15:37.744 "adrfam": "IPv4", 00:15:37.744 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:37.744 "trsvcid": "0" 00:15:37.744 } 00:15:37.744 ], 00:15:37.744 "allow_any_host": true, 00:15:37.744 "hosts": [], 00:15:37.744 "serial_number": "SPDK2", 00:15:37.744 "model_number": "SPDK bdev Controller", 00:15:37.744 
"max_namespaces": 32, 00:15:37.744 "min_cntlid": 1, 00:15:37.744 "max_cntlid": 65519, 00:15:37.744 "namespaces": [ 00:15:37.744 { 00:15:37.744 "nsid": 1, 00:15:37.744 "bdev_name": "Malloc2", 00:15:37.744 "name": "Malloc2", 00:15:37.744 "nguid": "3E8E951425A74A478077D5D8BC590241", 00:15:37.744 "uuid": "3e8e9514-25a7-4a47-8077-d5d8bc590241" 00:15:37.744 }, 00:15:37.744 { 00:15:37.744 "nsid": 2, 00:15:37.744 "bdev_name": "Malloc4", 00:15:37.744 "name": "Malloc4", 00:15:37.744 "nguid": "D4AAA0500D5E41B7BA8977CD78285EDD", 00:15:37.744 "uuid": "d4aaa050-0d5e-41b7-ba89-77cd78285edd" 00:15:37.744 } 00:15:37.744 ] 00:15:37.744 } 00:15:37.744 ] 00:15:37.744 04:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2757981 00:15:37.744 04:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:37.744 04:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2752386 00:15:37.744 04:31:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 2752386 ']' 00:15:37.744 04:31:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 2752386 00:15:37.744 04:31:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:37.744 04:31:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:37.744 04:31:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2752386 00:15:37.744 04:31:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:37.744 04:31:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:37.744 04:31:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2752386' 00:15:37.744 killing process with pid 2752386 00:15:37.744 04:31:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 2752386 00:15:37.744 04:31:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 2752386 00:15:38.004 04:31:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:38.004 04:31:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:38.004 04:31:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:38.004 04:31:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:38.004 04:31:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:38.004 04:31:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2758119 00:15:38.004 04:31:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2758119' 00:15:38.004 Process pid: 2758119 00:15:38.004 04:31:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:38.004 04:31:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:38.004 04:31:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2758119 00:15:38.004 04:31:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 2758119 ']' 00:15:38.004 04:31:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.004 04:31:58 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:38.004 04:31:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.263 04:31:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:38.263 04:31:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:38.263 [2024-07-14 04:31:58.236923] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:38.263 [2024-07-14 04:31:58.237959] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:38.263 [2024-07-14 04:31:58.238013] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.263 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.263 [2024-07-14 04:31:58.298523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:38.263 [2024-07-14 04:31:58.389908] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.263 [2024-07-14 04:31:58.389957] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.263 [2024-07-14 04:31:58.389986] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.263 [2024-07-14 04:31:58.389998] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.263 [2024-07-14 04:31:58.390008] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.263 [2024-07-14 04:31:58.390081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.263 [2024-07-14 04:31:58.390143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.263 [2024-07-14 04:31:58.390220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:38.263 [2024-07-14 04:31:58.390222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.523 [2024-07-14 04:31:58.489794] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:38.523 [2024-07-14 04:31:58.490048] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:38.523 [2024-07-14 04:31:58.490311] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:38.523 [2024-07-14 04:31:58.490947] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:38.523 [2024-07-14 04:31:58.491183] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
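The interrupt-mode pass above re-launches nvmf_tgt with --interrupt-mode and then repeats the same vfio-user bring-up used by the earlier pass. Condensed into a sketch (sizes, NQNs and socket paths are the ones from this run; rpc.py is SPDK's scripts/rpc.py, and the extra transport arguments -M -I are reproduced verbatim from this pass rather than explained):

  # start the target in interrupt mode, as invoked above
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  # create the vfio-user transport with this pass's transport args
  scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
  mkdir -p /var/run/vfio-user
  for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i        # 64 MB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done

The timestamped commands that follow in the log are exactly this sequence, with the full Jenkins workspace paths spelled out.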
00:15:38.523 04:31:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:38.523 04:31:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:38.523 04:31:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:39.462 04:31:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:39.720 04:31:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:39.720 04:31:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:39.720 04:31:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:39.720 04:31:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:39.720 04:31:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:39.978 Malloc1 00:15:39.978 04:32:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:40.237 04:32:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:40.495 04:32:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:40.753 04:32:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:40.753 04:32:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:40.753 04:32:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:41.011 Malloc2 00:15:41.011 04:32:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:41.270 04:32:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:41.527 04:32:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:41.787 04:32:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:41.787 04:32:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2758119 00:15:41.787 04:32:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 2758119 ']' 00:15:41.787 04:32:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 2758119 00:15:41.787 04:32:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:41.787 04:32:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:41.787 04:32:01 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2758119 00:15:41.787 04:32:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:41.787 04:32:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:41.787 04:32:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2758119' 00:15:41.787 killing process with pid 2758119 00:15:41.787 04:32:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 2758119 00:15:41.787 04:32:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 2758119 00:15:42.046 04:32:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:42.046 04:32:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:42.046 00:15:42.046 real 0m52.753s 00:15:42.046 user 3m28.484s 00:15:42.046 sys 0m4.359s 00:15:42.046 04:32:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:42.046 04:32:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:42.046 ************************************ 00:15:42.046 END TEST nvmf_vfio_user 00:15:42.046 ************************************ 00:15:42.046 04:32:02 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:42.046 04:32:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:42.046 04:32:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:42.046 04:32:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:42.046 ************************************ 00:15:42.046 START TEST nvmf_vfio_user_nvme_compliance 00:15:42.046 ************************************ 00:15:42.046 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:42.306 * Looking for test storage... 
00:15:42.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.306 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=2758717 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2758717' 00:15:42.307 Process pid: 2758717 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2758717 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 2758717 ']' 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:42.307 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:42.307 [2024-07-14 04:32:02.331203] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:42.307 [2024-07-14 04:32:02.331300] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.307 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.307 [2024-07-14 04:32:02.395123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:42.307 [2024-07-14 04:32:02.485979] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.307 [2024-07-14 04:32:02.486040] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.307 [2024-07-14 04:32:02.486056] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.307 [2024-07-14 04:32:02.486070] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.307 [2024-07-14 04:32:02.486082] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
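Once this target finishes starting, the compliance pass below stands up a single vfio-user controller and points the nvme_compliance binary at it. Condensed (names and paths are those used in this run; rpc_cmd is the test framework's wrapper around scripts/rpc.py):

  rpc_cmd nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  rpc_cmd bdev_malloc_create 64 512 -b malloc0
  rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'

Each "Test:" line the CUnit suite prints afterwards exercises one piece of controller behaviour against that vfio-user controller; in this run all 18 tests and 360 asserts pass.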
00:15:42.307 [2024-07-14 04:32:02.486142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.307 [2024-07-14 04:32:02.486216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.307 [2024-07-14 04:32:02.486218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.566 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:42.566 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:15:42.566 04:32:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:43.523 malloc0 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:43.523 04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.523 
04:32:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:43.781 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.781 00:15:43.781 00:15:43.781 CUnit - A unit testing framework for C - Version 2.1-3 00:15:43.781 http://cunit.sourceforge.net/ 00:15:43.781 00:15:43.781 00:15:43.781 Suite: nvme_compliance 00:15:43.781 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-14 04:32:03.839450] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.781 [2024-07-14 04:32:03.840984] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:43.781 [2024-07-14 04:32:03.841010] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:43.781 [2024-07-14 04:32:03.841024] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:43.782 [2024-07-14 04:32:03.842469] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.782 passed 00:15:43.782 Test: admin_identify_ctrlr_verify_fused ...[2024-07-14 04:32:03.927071] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.782 [2024-07-14 04:32:03.930091] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.782 passed 00:15:44.039 Test: admin_identify_ns ...[2024-07-14 04:32:04.018428] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.039 [2024-07-14 04:32:04.078900] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:44.039 [2024-07-14 04:32:04.086900] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:44.039 [2024-07-14 04:32:04.108024] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.039 passed 00:15:44.039 Test: admin_get_features_mandatory_features ...[2024-07-14 04:32:04.192595] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.039 [2024-07-14 04:32:04.195619] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.039 passed 00:15:44.297 Test: admin_get_features_optional_features ...[2024-07-14 04:32:04.276140] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.297 [2024-07-14 04:32:04.281183] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.297 passed 00:15:44.297 Test: admin_set_features_number_of_queues ...[2024-07-14 04:32:04.362364] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.297 [2024-07-14 04:32:04.470963] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.556 passed 00:15:44.556 Test: admin_get_log_page_mandatory_logs ...[2024-07-14 04:32:04.552774] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.556 [2024-07-14 04:32:04.555800] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.556 passed 00:15:44.556 Test: admin_get_log_page_with_lpo ...[2024-07-14 04:32:04.639962] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.556 [2024-07-14 04:32:04.707884] 
ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:44.556 [2024-07-14 04:32:04.720961] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.816 passed 00:15:44.816 Test: fabric_property_get ...[2024-07-14 04:32:04.804727] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.816 [2024-07-14 04:32:04.806018] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:44.816 [2024-07-14 04:32:04.807759] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.816 passed 00:15:44.816 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-14 04:32:04.892344] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.816 [2024-07-14 04:32:04.893618] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:44.816 [2024-07-14 04:32:04.895367] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.816 passed 00:15:44.816 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-14 04:32:04.977583] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.075 [2024-07-14 04:32:05.060876] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:45.075 [2024-07-14 04:32:05.076879] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:45.075 [2024-07-14 04:32:05.085017] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:45.075 passed 00:15:45.075 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-14 04:32:05.166933] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.075 [2024-07-14 04:32:05.168227] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:45.075 [2024-07-14 04:32:05.169943] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:45.075 passed 00:15:45.075 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-14 04:32:05.253420] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.335 [2024-07-14 04:32:05.328890] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:45.335 [2024-07-14 04:32:05.352880] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:45.335 [2024-07-14 04:32:05.357967] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:45.335 passed 00:15:45.335 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-14 04:32:05.440184] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.335 [2024-07-14 04:32:05.441440] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:45.335 [2024-07-14 04:32:05.441496] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:45.335 [2024-07-14 04:32:05.443202] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:45.335 passed 00:15:45.594 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-14 04:32:05.527419] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.594 [2024-07-14 04:32:05.619876] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:15:45.594 [2024-07-14 04:32:05.627889] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:45.594 [2024-07-14 04:32:05.635894] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:45.594 [2024-07-14 04:32:05.643873] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:45.594 [2024-07-14 04:32:05.671985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:45.594 passed 00:15:45.594 Test: admin_create_io_sq_verify_pc ...[2024-07-14 04:32:05.755731] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.594 [2024-07-14 04:32:05.768889] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:45.852 [2024-07-14 04:32:05.786195] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:45.852 passed 00:15:45.852 Test: admin_create_io_qp_max_qps ...[2024-07-14 04:32:05.873787] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.792 [2024-07-14 04:32:06.958883] nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:47.364 [2024-07-14 04:32:07.335974] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.364 passed 00:15:47.364 Test: admin_create_io_sq_shared_cq ...[2024-07-14 04:32:07.419196] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.364 [2024-07-14 04:32:07.550892] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:47.622 [2024-07-14 04:32:07.584963] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.622 passed 00:15:47.622 00:15:47.622 Run Summary: Type Total Ran Passed Failed Inactive 00:15:47.623 suites 1 1 n/a 0 0 00:15:47.623 tests 18 18 18 0 0 00:15:47.623 asserts 360 360 360 0 n/a 00:15:47.623 00:15:47.623 Elapsed time = 1.549 seconds 00:15:47.623 04:32:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2758717 00:15:47.623 04:32:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 2758717 ']' 00:15:47.623 04:32:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 2758717 00:15:47.623 04:32:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:15:47.623 04:32:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:47.623 04:32:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2758717 00:15:47.623 04:32:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:47.623 04:32:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:47.623 04:32:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2758717' 00:15:47.623 killing process with pid 2758717 00:15:47.623 04:32:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # kill 2758717 00:15:47.623 04:32:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 2758717 00:15:47.881 04:32:07 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:47.881 04:32:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:47.881 00:15:47.881 real 0m5.677s 00:15:47.881 user 0m15.999s 00:15:47.881 sys 0m0.521s 00:15:47.881 04:32:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:47.881 04:32:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:47.881 ************************************ 00:15:47.881 END TEST nvmf_vfio_user_nvme_compliance 00:15:47.881 ************************************ 00:15:47.881 04:32:07 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:47.881 04:32:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:47.881 04:32:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:47.881 04:32:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:47.881 ************************************ 00:15:47.881 START TEST nvmf_vfio_user_fuzz 00:15:47.881 ************************************ 00:15:47.881 04:32:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:47.881 * Looking for test storage... 00:15:47.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:47.881 04:32:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:47.881 04:32:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:47.881 04:32:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.881 04:32:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.881 04:32:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.881 04:32:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.881 04:32:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.881 04:32:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.881 04:32:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.881 04:32:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.881 04:32:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.881 04:32:07 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:47.881 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:47.882 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:47.882 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2759439 00:15:47.882 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:47.882 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2759439' 00:15:47.882 Process pid: 2759439 00:15:47.882 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:47.882 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2759439 00:15:47.882 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 2759439 ']' 00:15:47.882 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.882 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:47.882 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
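The fuzz pass that follows uses the same single-controller bring-up and then runs SPDK's nvme_fuzz against the vfio-user endpoint for 30 seconds. Condensed (values are the ones from this run; the remaining nvme_fuzz flags are reproduced verbatim rather than explained, see the tool's own help for their meaning):

  rpc_cmd nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  rpc_cmd bdev_malloc_create 64 512 -b malloc0
  rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a

The counts dumped at the end ("Dumping successful admin opcodes...", "Dumping successful io opcodes...") summarize which of the randomly generated commands the controller completed successfully during that window.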
00:15:47.882 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:47.882 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:48.140 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:48.140 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:15:48.140 04:32:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:49.517 malloc0 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:49.517 04:32:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:21.607 Fuzzing completed. 
Shutting down the fuzz application 00:16:21.607 00:16:21.607 Dumping successful admin opcodes: 00:16:21.607 8, 9, 10, 24, 00:16:21.607 Dumping successful io opcodes: 00:16:21.607 0, 00:16:21.607 NS: 0x200003a1ef00 I/O qp, Total commands completed: 651434, total successful commands: 2527, random_seed: 662948032 00:16:21.607 NS: 0x200003a1ef00 admin qp, Total commands completed: 82848, total successful commands: 663, random_seed: 97068736 00:16:21.607 04:32:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:21.607 04:32:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.607 04:32:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:21.607 04:32:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.607 04:32:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2759439 00:16:21.607 04:32:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 2759439 ']' 00:16:21.607 04:32:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 2759439 00:16:21.607 04:32:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:16:21.607 04:32:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:21.607 04:32:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2759439 00:16:21.607 04:32:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:21.607 04:32:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:21.607 04:32:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2759439' 00:16:21.607 killing process with pid 2759439 00:16:21.608 04:32:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 2759439 00:16:21.608 04:32:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 2759439 00:16:21.608 04:32:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:21.608 04:32:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:21.608 00:16:21.608 real 0m32.160s 00:16:21.608 user 0m34.150s 00:16:21.608 sys 0m26.062s 00:16:21.608 04:32:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:21.608 04:32:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:21.608 ************************************ 00:16:21.608 END TEST nvmf_vfio_user_fuzz 00:16:21.608 ************************************ 00:16:21.608 04:32:40 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:21.608 04:32:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:21.608 04:32:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:21.608 04:32:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:21.608 ************************************ 00:16:21.608 START TEST nvmf_host_management 00:16:21.608 ************************************ 
00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:21.608 * Looking for test storage... 00:16:21.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:21.608 04:32:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:22.176 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:22.177 04:32:42 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:22.177 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:22.177 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:22.177 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:22.177 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:22.177 04:32:42 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:22.177 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:22.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:16:22.436 00:16:22.436 --- 10.0.0.2 ping statistics --- 00:16:22.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.436 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:22.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:22.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:16:22.436 00:16:22.436 --- 10.0.0.1 ping statistics --- 00:16:22.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.436 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2764753 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2764753 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 2764753 ']' 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:22.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:22.436 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:22.436 [2024-07-14 04:32:42.451441] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:22.436 [2024-07-14 04:32:42.451539] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.436 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.436 [2024-07-14 04:32:42.516535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:22.436 [2024-07-14 04:32:42.605933] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.436 [2024-07-14 04:32:42.605984] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.436 [2024-07-14 04:32:42.606017] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.436 [2024-07-14 04:32:42.606029] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.436 [2024-07-14 04:32:42.606039] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:22.436 [2024-07-14 04:32:42.606122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.436 [2024-07-14 04:32:42.606180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:22.436 [2024-07-14 04:32:42.606250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:22.436 [2024-07-14 04:32:42.606252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:22.695 [2024-07-14 04:32:42.744535] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:22.695 Malloc0 00:16:22.695 [2024-07-14 04:32:42.804130] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2764924 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2764924 /var/tmp/bdevperf.sock 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 2764924 ']' 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:22.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:22.695 { 00:16:22.695 "params": { 00:16:22.695 "name": "Nvme$subsystem", 00:16:22.695 "trtype": "$TEST_TRANSPORT", 00:16:22.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:22.695 "adrfam": "ipv4", 00:16:22.695 "trsvcid": "$NVMF_PORT", 00:16:22.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:22.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:22.695 "hdgst": ${hdgst:-false}, 00:16:22.695 "ddgst": ${ddgst:-false} 00:16:22.695 }, 00:16:22.695 "method": "bdev_nvme_attach_controller" 00:16:22.695 } 00:16:22.695 EOF 00:16:22.695 )") 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:22.695 04:32:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:22.695 "params": { 00:16:22.695 "name": "Nvme0", 00:16:22.695 "trtype": "tcp", 00:16:22.695 "traddr": "10.0.0.2", 00:16:22.695 "adrfam": "ipv4", 00:16:22.695 "trsvcid": "4420", 00:16:22.695 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:22.695 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:22.695 "hdgst": false, 00:16:22.695 "ddgst": false 00:16:22.695 }, 00:16:22.695 "method": "bdev_nvme_attach_controller" 00:16:22.695 }' 00:16:22.695 [2024-07-14 04:32:42.874467] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:22.695 [2024-07-14 04:32:42.874544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2764924 ] 00:16:22.955 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.955 [2024-07-14 04:32:42.937023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.955 [2024-07-14 04:32:43.023672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.212 Running I/O for 10 seconds... 
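The JSON block printed just above is the bdev config that gen_nvmf_target_json feeds bdevperf on /dev/fd/63: a single bdev_nvme_attach_controller call against 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode0. A hedged equivalent, issued by hand over bdevperf's RPC socket once it is listening (the scripts/rpc.py path relative to an SPDK checkout is an assumption):

  # Attach the same controller described by the JSON above via RPC instead of
  # a --json config; the flags mirror the "params" fields in the trace.
  sudo scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0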
00:16:23.212 04:32:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:23.212 04:32:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:23.212 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:23.212 04:32:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.212 04:32:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:23.212 04:32:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.212 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:23.212 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:23.212 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:23.212 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:23.212 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:23.212 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:23.212 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:23.212 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:23.212 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:23.212 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:23.212 04:32:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.212 04:32:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:23.212 04:32:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.469 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=65 00:16:23.469 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 65 -ge 100 ']' 00:16:23.469 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:23.729 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:23.729 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:23.729 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:23.729 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:23.729 04:32:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.729 04:32:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:23.729 04:32:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.729 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:16:23.729 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:16:23.729 04:32:43 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:16:23.729 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:23.729 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:23.729 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:23.729 04:32:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.729 04:32:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:23.729 [2024-07-14 04:32:43.707192] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbca980 is same with the state(5) to be set 00:16:23.729 [2024-07-14 04:32:43.707741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.707783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.707812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.707829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.707845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.707859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.707885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.707911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.707927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.707941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.707957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.707972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.707987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:23.729 [2024-07-14 04:32:43.708648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.729 [2024-07-14 04:32:43.708842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.729 [2024-07-14 04:32:43.708857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.708878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.708895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.708909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.708924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.708938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 
[2024-07-14 04:32:43.708953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.708967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.708982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.708996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 
04:32:43.709249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709543] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.730 [2024-07-14 04:32:43.709706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.730 [2024-07-14 04:32:43.709792] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21a8330 was disconnected and freed. reset controller. 
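The burst of ABORTED - SQ DELETION completions and the qpair disconnect above are the expected fallout of the step this test exercises: the rpc_cmd nvmf_subsystem_remove_host seen earlier revokes the initiator's access while 64-deep verify I/O is in flight, so every outstanding command on the deleted queues is aborted and bdevperf resets the controller. A hedged sketch of the same revoke/restore pair done by hand against a running target (default RPC socket assumed):

  # Revoke and then restore the host's access to the subsystem; in-flight I/O
  # on the existing connection is aborted, as in the dump above.
  sudo scripts/rpc.py nvmf_subsystem_remove_host \
      nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  sleep 1
  sudo scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0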
00:16:23.730 [2024-07-14 04:32:43.710932] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:23.730 04:32:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.730 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:23.730 04:32:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.730 04:32:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:23.730 task offset: 62976 on job bdev=Nvme0n1 fails 00:16:23.730 00:16:23.730 Latency(us) 00:16:23.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.730 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:23.730 Job: Nvme0n1 ended in about 0.39 seconds with error 00:16:23.730 Verification LBA range: start 0x0 length 0x400 00:16:23.730 Nvme0n1 : 0.39 1144.06 71.50 163.44 0.00 47602.58 2694.26 41748.86 00:16:23.730 =================================================================================================================== 00:16:23.730 Total : 1144.06 71.50 163.44 0.00 47602.58 2694.26 41748.86 00:16:23.730 [2024-07-14 04:32:43.712821] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:23.730 [2024-07-14 04:32:43.712855] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21adf00 (9): Bad file descriptor 00:16:23.730 04:32:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.730 04:32:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:23.730 [2024-07-14 04:32:43.720998] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
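After the add_host call the controller reset completes ("Resetting controller successful"), and the latency table records the first job ending early with the aborted commands counted under Fail/s. A hedged way to confirm the initiator-side state at this point (same bdevperf RPC socket as above):

  # List the NVMe controllers bdevperf holds after the reset; Nvme0 should be
  # attached again.
  sudo scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers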
00:16:24.665 04:32:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2764924 00:16:24.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2764924) - No such process 00:16:24.665 04:32:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:24.665 04:32:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:24.665 04:32:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:24.665 04:32:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:24.665 04:32:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:24.665 04:32:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:24.665 04:32:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:24.665 04:32:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:24.665 { 00:16:24.665 "params": { 00:16:24.665 "name": "Nvme$subsystem", 00:16:24.665 "trtype": "$TEST_TRANSPORT", 00:16:24.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:24.665 "adrfam": "ipv4", 00:16:24.665 "trsvcid": "$NVMF_PORT", 00:16:24.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:24.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:24.665 "hdgst": ${hdgst:-false}, 00:16:24.665 "ddgst": ${ddgst:-false} 00:16:24.665 }, 00:16:24.665 "method": "bdev_nvme_attach_controller" 00:16:24.665 } 00:16:24.665 EOF 00:16:24.665 )") 00:16:24.665 04:32:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:24.665 04:32:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:24.665 04:32:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:24.665 04:32:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:24.665 "params": { 00:16:24.665 "name": "Nvme0", 00:16:24.665 "trtype": "tcp", 00:16:24.665 "traddr": "10.0.0.2", 00:16:24.665 "adrfam": "ipv4", 00:16:24.665 "trsvcid": "4420", 00:16:24.665 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:24.665 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:24.665 "hdgst": false, 00:16:24.665 "ddgst": false 00:16:24.665 }, 00:16:24.665 "method": "bdev_nvme_attach_controller" 00:16:24.665 }' 00:16:24.665 [2024-07-14 04:32:44.768259] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:24.665 [2024-07-14 04:32:44.768336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2765076 ] 00:16:24.665 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.665 [2024-07-14 04:32:44.829233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.924 [2024-07-14 04:32:44.916519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.184 Running I/O for 1 seconds... 
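Editor's note: the bdevperf invocation above consumes the attach-controller entry printed by gen_nvmf_target_json via the --json file descriptor. A minimal sketch of rerunning the same workload by hand is below; the config.json wrapper layout (a bdev subsystem whose config array holds the printed entry) is an assumption, since the trace only shows the inner entry, and the flags are copied from the invocation above:

  # config.json assumed layout, wrapping the entry printed above:
  # { "subsystems": [ { "subsystem": "bdev",
  #     "config": [ { "params": { ...as printed... }, "method": "bdev_nvme_attach_controller" } ] } ] }
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      --json config.json -q 64 -o 65536 -w verify -t 1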
00:16:26.122 00:16:26.122 Latency(us) 00:16:26.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.122 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:26.122 Verification LBA range: start 0x0 length 0x400 00:16:26.122 Nvme0n1 : 1.05 1280.11 80.01 0.00 0.00 49236.52 11845.03 38641.97 00:16:26.122 =================================================================================================================== 00:16:26.122 Total : 1280.11 80.01 0.00 0.00 49236.52 11845.03 38641.97 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:26.384 rmmod nvme_tcp 00:16:26.384 rmmod nvme_fabrics 00:16:26.384 rmmod nvme_keyring 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2764753 ']' 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2764753 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 2764753 ']' 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 2764753 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2764753 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2764753' 00:16:26.384 killing process with pid 2764753 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 2764753 00:16:26.384 04:32:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 2764753 00:16:26.640 [2024-07-14 04:32:46.739196] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:26.640 04:32:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:26.640 04:32:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:26.640 04:32:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:26.640 04:32:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:26.640 04:32:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:26.640 04:32:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.640 04:32:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.640 04:32:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.182 04:32:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:29.182 04:32:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:29.182 00:16:29.182 real 0m8.657s 00:16:29.182 user 0m19.578s 00:16:29.182 sys 0m2.629s 00:16:29.182 04:32:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:29.182 04:32:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:29.182 ************************************ 00:16:29.182 END TEST nvmf_host_management 00:16:29.182 ************************************ 00:16:29.182 04:32:48 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:29.182 04:32:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:29.182 04:32:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:29.182 04:32:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:29.182 ************************************ 00:16:29.182 START TEST nvmf_lvol 00:16:29.182 ************************************ 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:29.182 * Looking for test storage... 
00:16:29.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.182 04:32:48 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:29.182 04:32:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:31.127 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:31.127 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:31.127 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.127 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:31.128 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:31.128 
04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:31.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:31.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:16:31.128 00:16:31.128 --- 10.0.0.2 ping statistics --- 00:16:31.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.128 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:31.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:31.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:16:31.128 00:16:31.128 --- 10.0.0.1 ping statistics --- 00:16:31.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.128 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:31.128 04:32:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:31.128 04:32:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2767277 00:16:31.128 04:32:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:31.128 04:32:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2767277 00:16:31.128 04:32:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 2767277 ']' 00:16:31.128 04:32:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.128 04:32:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:31.128 04:32:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.128 04:32:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:31.128 04:32:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:31.128 [2024-07-14 04:32:51.046789] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:31.128 [2024-07-14 04:32:51.046886] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.128 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.128 [2024-07-14 04:32:51.111640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:31.128 [2024-07-14 04:32:51.200093] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.128 [2024-07-14 04:32:51.200169] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:31.128 [2024-07-14 04:32:51.200183] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.128 [2024-07-14 04:32:51.200194] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.128 [2024-07-14 04:32:51.200203] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:31.128 [2024-07-14 04:32:51.200315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.128 [2024-07-14 04:32:51.201887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.128 [2024-07-14 04:32:51.201899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.128 04:32:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:31.128 04:32:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:16:31.128 04:32:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:31.128 04:32:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:31.128 04:32:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:31.387 04:32:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.387 04:32:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:31.644 [2024-07-14 04:32:51.622835] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:31.644 04:32:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:31.902 04:32:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:31.902 04:32:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:32.160 04:32:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:32.160 04:32:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:32.418 04:32:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:32.676 04:32:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f28abeab-f9ae-4e7b-b395-35bff74ca60c 00:16:32.676 04:32:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f28abeab-f9ae-4e7b-b395-35bff74ca60c lvol 20 00:16:32.934 04:32:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0e8d6ef6-a62d-4a24-9749-4c51ba7273c4 00:16:32.934 04:32:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:33.190 04:32:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0e8d6ef6-a62d-4a24-9749-4c51ba7273c4 00:16:33.448 04:32:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
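Editor's note: collected in one place, the target-side setup that nvmf_lvol.sh has just walked through is the RPC sequence below (a sketch assembled from the commands already shown in this trace; <lvstore-uuid> and <lvol-uuid> stand for the UUIDs the creation calls print, which differ from run to run):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                 # TCP transport
  $RPC bdev_malloc_create 64 512                               # Malloc0
  $RPC bdev_malloc_create 64 512                               # Malloc1
  $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  $RPC bdev_lvol_create_lvstore raid0 lvs                      # prints the lvstore UUID
  $RPC bdev_lvol_create -u <lvstore-uuid> lvol 20              # size 20 = LVOL_BDEV_INIT_SIZE
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420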
00:16:33.705 [2024-07-14 04:32:53.869883] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.705 04:32:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:34.275 04:32:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2767704 00:16:34.275 04:32:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:34.275 04:32:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:34.275 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.212 04:32:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0e8d6ef6-a62d-4a24-9749-4c51ba7273c4 MY_SNAPSHOT 00:16:35.469 04:32:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=14c4b561-f7d6-4fa1-9987-43c41fa744db 00:16:35.469 04:32:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0e8d6ef6-a62d-4a24-9749-4c51ba7273c4 30 00:16:35.726 04:32:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 14c4b561-f7d6-4fa1-9987-43c41fa744db MY_CLONE 00:16:35.984 04:32:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1d95f83b-08e3-4d8e-b17f-ce9df3671296 00:16:35.984 04:32:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1d95f83b-08e3-4d8e-b17f-ce9df3671296 00:16:36.549 04:32:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2767704 00:16:44.657 Initializing NVMe Controllers 00:16:44.657 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:44.657 Controller IO queue size 128, less than required. 00:16:44.657 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:44.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:44.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:44.657 Initialization complete. Launching workers. 
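Editor's note: while spdk_nvme_perf (started above with -q 128 -o 4096 -w randwrite -t 10) writes to the exported namespace, the test mutates the lvol through the snapshot/clone path; condensed from the trace, the sequence is roughly (UUID placeholders stand for the bdevs created earlier in this run):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT       # freeze the current data
  $RPC bdev_lvol_resize  <lvol-uuid> 30                 # grow to LVOL_BDEV_FINAL_SIZE
  $RPC bdev_lvol_clone   <snapshot-uuid> MY_CLONE       # thin clone of the snapshot
  $RPC bdev_lvol_inflate <clone-uuid>                   # decouple the clone from its snapshot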
00:16:44.657 ======================================================== 00:16:44.657 Latency(us) 00:16:44.657 Device Information : IOPS MiB/s Average min max 00:16:44.657 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10609.15 41.44 12075.06 1871.82 113546.32 00:16:44.657 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10460.16 40.86 12239.57 2404.40 51150.07 00:16:44.657 ======================================================== 00:16:44.657 Total : 21069.31 82.30 12156.74 1871.82 113546.32 00:16:44.657 00:16:44.657 04:33:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:44.657 04:33:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0e8d6ef6-a62d-4a24-9749-4c51ba7273c4 00:16:44.915 04:33:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f28abeab-f9ae-4e7b-b395-35bff74ca60c 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:45.174 rmmod nvme_tcp 00:16:45.174 rmmod nvme_fabrics 00:16:45.174 rmmod nvme_keyring 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2767277 ']' 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2767277 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 2767277 ']' 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 2767277 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2767277 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2767277' 00:16:45.174 killing process with pid 2767277 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 2767277 00:16:45.174 04:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 2767277 00:16:45.433 04:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:45.433 
04:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:45.433 04:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:45.433 04:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:45.433 04:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:45.433 04:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.433 04:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.433 04:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:47.968 00:16:47.968 real 0m18.772s 00:16:47.968 user 1m4.212s 00:16:47.968 sys 0m5.657s 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:47.968 ************************************ 00:16:47.968 END TEST nvmf_lvol 00:16:47.968 ************************************ 00:16:47.968 04:33:07 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:47.968 04:33:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:47.968 04:33:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:47.968 04:33:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:47.968 ************************************ 00:16:47.968 START TEST nvmf_lvs_grow 00:16:47.968 ************************************ 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:47.968 * Looking for test storage... 
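Editor's note: the teardown that closes nvmf_lvol above mirrors the setup in reverse; condensed from the trace, it is roughly the following (a sketch; the target PID and interface name are the ones from this particular run):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $RPC bdev_lvol_delete <lvol-uuid>
  $RPC bdev_lvol_delete_lvstore -u <lvstore-uuid>
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics   # unload host-side modules
  kill 2767277                                             # stop the nvmf_tgt started for this test
  ip -4 addr flush cvl_0_1                                 # drop the initiator-side test address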
00:16:47.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:47.968 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:47.969 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:47.969 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.969 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.969 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:47.969 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:47.969 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:47.969 04:33:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:47.969 04:33:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:47.969 04:33:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:47.969 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:47.969 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.969 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:47.969 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:47.969 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:47.969 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.969 04:33:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.969 04:33:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.969 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:47.969 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:47.969 04:33:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:47.969 04:33:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:49.347 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:49.347 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:49.347 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:49.348 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:49.348 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:49.348 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:49.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:49.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:16:49.607 00:16:49.607 --- 10.0.0.2 ping statistics --- 00:16:49.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.607 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:49.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:49.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:16:49.607 00:16:49.607 --- 10.0.0.1 ping statistics --- 00:16:49.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.607 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2771453 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2771453 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 2771453 ']' 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:49.607 04:33:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:49.607 [2024-07-14 04:33:09.739874] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:49.607 [2024-07-14 04:33:09.739979] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.607 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.867 [2024-07-14 04:33:09.814943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.867 [2024-07-14 04:33:09.907537] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.867 [2024-07-14 04:33:09.907596] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
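For readers reproducing this environment by hand, the interface plumbing that nvmf_tcp_init traced above boils down to roughly the following sequence (a condensed sketch assembled from the commands in this log; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.1/10.0.0.2 addresses are simply the values this particular run used, with cvl_0_0 acting as the target port inside a network namespace and cvl_0_1 as the initiator port on the host):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1            # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk                                    # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address (host side)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (namespace side)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                              # host -> namespace reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # namespace -> host reachability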
00:16:49.867 [2024-07-14 04:33:09.907612] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.867 [2024-07-14 04:33:09.907625] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.867 [2024-07-14 04:33:09.907636] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.867 [2024-07-14 04:33:09.907665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.867 04:33:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:49.867 04:33:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:16:49.867 04:33:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:49.867 04:33:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:49.867 04:33:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:49.867 04:33:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.867 04:33:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:50.126 [2024-07-14 04:33:10.276281] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:50.126 04:33:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:50.126 04:33:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:50.126 04:33:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:50.126 04:33:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:50.385 ************************************ 00:16:50.385 START TEST lvs_grow_clean 00:16:50.385 ************************************ 00:16:50.385 04:33:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:16:50.385 04:33:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:50.385 04:33:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:50.385 04:33:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:50.385 04:33:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:50.385 04:33:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:50.385 04:33:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:50.385 04:33:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:50.385 04:33:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:50.385 04:33:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:50.645 04:33:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:16:50.645 04:33:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:50.941 04:33:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=11d95694-99c3-469e-b407-d8a5cbb561fc 00:16:50.941 04:33:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11d95694-99c3-469e-b407-d8a5cbb561fc 00:16:50.941 04:33:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:51.200 04:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:51.200 04:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:51.200 04:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 11d95694-99c3-469e-b407-d8a5cbb561fc lvol 150 00:16:51.200 04:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7ad2b2bd-682b-42ec-9f88-eca182339e1c 00:16:51.200 04:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:51.200 04:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:51.459 [2024-07-14 04:33:11.613031] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:51.459 [2024-07-14 04:33:11.613123] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:51.459 true 00:16:51.459 04:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11d95694-99c3-469e-b407-d8a5cbb561fc 00:16:51.459 04:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:51.718 04:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:51.718 04:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:51.977 04:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7ad2b2bd-682b-42ec-9f88-eca182339e1c 00:16:52.233 04:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:52.490 [2024-07-14 04:33:12.640152] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.490 04:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:52.747 04:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2771891 00:16:52.747 04:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:52.747 04:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:52.747 04:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2771891 /var/tmp/bdevperf.sock 00:16:52.747 04:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 2771891 ']' 00:16:52.747 04:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:52.747 04:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:52.747 04:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:52.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:52.747 04:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:52.747 04:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:53.004 [2024-07-14 04:33:12.946480] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
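Condensed, the target-side sequence the clean test has driven up to this point is the following (a sketch assembled from the RPCs traced above; rpc.py stands for scripts/rpc.py, aio_bdev_file for the backing file under test/nvmf/target/, and the UUIDs are whatever the create calls return on a given run):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  truncate -s 200M aio_bdev_file                      # 200M backing file
  rpc.py bdev_aio_create aio_bdev_file aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  rpc.py bdev_lvol_get_lvstores -u <lvs_uuid> | jq -r '.[0].total_data_clusters'   # 49 at this file size
  rpc.py bdev_lvol_create -u <lvs_uuid> lvol 150      # 150M volume on the store
  truncate -s 400M aio_bdev_file                      # grow the backing file...
  rpc.py bdev_aio_rescan aio_bdev                     # ...and let the AIO bdev pick up the new size
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol_uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

bdev_lvol_grow_lvstore itself is only issued further down, while bdevperf I/O is already running, after which the same total_data_clusters query is expected to report 99 instead of 49.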
00:16:53.005 [2024-07-14 04:33:12.946564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2771891 ] 00:16:53.005 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.005 [2024-07-14 04:33:13.013025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.005 [2024-07-14 04:33:13.103944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.262 04:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:53.262 04:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:16:53.262 04:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:53.520 Nvme0n1 00:16:53.520 04:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:53.778 [ 00:16:53.778 { 00:16:53.778 "name": "Nvme0n1", 00:16:53.778 "aliases": [ 00:16:53.778 "7ad2b2bd-682b-42ec-9f88-eca182339e1c" 00:16:53.778 ], 00:16:53.778 "product_name": "NVMe disk", 00:16:53.778 "block_size": 4096, 00:16:53.778 "num_blocks": 38912, 00:16:53.778 "uuid": "7ad2b2bd-682b-42ec-9f88-eca182339e1c", 00:16:53.778 "assigned_rate_limits": { 00:16:53.778 "rw_ios_per_sec": 0, 00:16:53.778 "rw_mbytes_per_sec": 0, 00:16:53.778 "r_mbytes_per_sec": 0, 00:16:53.778 "w_mbytes_per_sec": 0 00:16:53.778 }, 00:16:53.778 "claimed": false, 00:16:53.778 "zoned": false, 00:16:53.778 "supported_io_types": { 00:16:53.778 "read": true, 00:16:53.778 "write": true, 00:16:53.778 "unmap": true, 00:16:53.778 "write_zeroes": true, 00:16:53.778 "flush": true, 00:16:53.778 "reset": true, 00:16:53.778 "compare": true, 00:16:53.778 "compare_and_write": true, 00:16:53.778 "abort": true, 00:16:53.778 "nvme_admin": true, 00:16:53.778 "nvme_io": true 00:16:53.778 }, 00:16:53.778 "memory_domains": [ 00:16:53.778 { 00:16:53.778 "dma_device_id": "system", 00:16:53.778 "dma_device_type": 1 00:16:53.778 } 00:16:53.778 ], 00:16:53.778 "driver_specific": { 00:16:53.778 "nvme": [ 00:16:53.778 { 00:16:53.778 "trid": { 00:16:53.778 "trtype": "TCP", 00:16:53.778 "adrfam": "IPv4", 00:16:53.778 "traddr": "10.0.0.2", 00:16:53.778 "trsvcid": "4420", 00:16:53.778 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:53.778 }, 00:16:53.778 "ctrlr_data": { 00:16:53.778 "cntlid": 1, 00:16:53.778 "vendor_id": "0x8086", 00:16:53.778 "model_number": "SPDK bdev Controller", 00:16:53.778 "serial_number": "SPDK0", 00:16:53.778 "firmware_revision": "24.05.1", 00:16:53.779 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:53.779 "oacs": { 00:16:53.779 "security": 0, 00:16:53.779 "format": 0, 00:16:53.779 "firmware": 0, 00:16:53.779 "ns_manage": 0 00:16:53.779 }, 00:16:53.779 "multi_ctrlr": true, 00:16:53.779 "ana_reporting": false 00:16:53.779 }, 00:16:53.779 "vs": { 00:16:53.779 "nvme_version": "1.3" 00:16:53.779 }, 00:16:53.779 "ns_data": { 00:16:53.779 "id": 1, 00:16:53.779 "can_share": true 00:16:53.779 } 00:16:53.779 } 00:16:53.779 ], 00:16:53.779 "mp_policy": "active_passive" 00:16:53.779 } 00:16:53.779 } 00:16:53.779 ] 00:16:53.779 04:33:13 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2772025 00:16:53.779 04:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:53.779 04:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:54.036 Running I/O for 10 seconds... 00:16:54.977 Latency(us) 00:16:54.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:54.977 Nvme0n1 : 1.00 13839.00 54.06 0.00 0.00 0.00 0.00 0.00 00:16:54.977 =================================================================================================================== 00:16:54.977 Total : 13839.00 54.06 0.00 0.00 0.00 0.00 0.00 00:16:54.977 00:16:55.935 04:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 11d95694-99c3-469e-b407-d8a5cbb561fc 00:16:55.935 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:55.935 Nvme0n1 : 2.00 14023.50 54.78 0.00 0.00 0.00 0.00 0.00 00:16:55.935 =================================================================================================================== 00:16:55.935 Total : 14023.50 54.78 0.00 0.00 0.00 0.00 0.00 00:16:55.935 00:16:56.192 true 00:16:56.192 04:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11d95694-99c3-469e-b407-d8a5cbb561fc 00:16:56.192 04:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:56.450 04:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:56.450 04:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:56.450 04:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2772025 00:16:57.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.018 Nvme0n1 : 3.00 14127.67 55.19 0.00 0.00 0.00 0.00 0.00 00:16:57.018 =================================================================================================================== 00:16:57.018 Total : 14127.67 55.19 0.00 0.00 0.00 0.00 0.00 00:16:57.018 00:16:57.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.955 Nvme0n1 : 4.00 14195.50 55.45 0.00 0.00 0.00 0.00 0.00 00:16:57.955 =================================================================================================================== 00:16:57.955 Total : 14195.50 55.45 0.00 0.00 0.00 0.00 0.00 00:16:57.955 00:16:58.894 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:58.894 Nvme0n1 : 5.00 14249.20 55.66 0.00 0.00 0.00 0.00 0.00 00:16:58.894 =================================================================================================================== 00:16:58.894 Total : 14249.20 55.66 0.00 0.00 0.00 0.00 0.00 00:16:58.894 00:16:59.829 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:59.829 Nvme0n1 : 6.00 14285.00 55.80 0.00 0.00 0.00 0.00 0.00 00:16:59.829 
=================================================================================================================== 00:16:59.829 Total : 14285.00 55.80 0.00 0.00 0.00 0.00 0.00 00:16:59.829 00:17:01.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.227 Nvme0n1 : 7.00 14347.00 56.04 0.00 0.00 0.00 0.00 0.00 00:17:01.227 =================================================================================================================== 00:17:01.227 Total : 14347.00 56.04 0.00 0.00 0.00 0.00 0.00 00:17:01.227 00:17:02.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:02.164 Nvme0n1 : 8.00 14377.62 56.16 0.00 0.00 0.00 0.00 0.00 00:17:02.164 =================================================================================================================== 00:17:02.164 Total : 14377.62 56.16 0.00 0.00 0.00 0.00 0.00 00:17:02.164 00:17:03.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.102 Nvme0n1 : 9.00 14401.44 56.26 0.00 0.00 0.00 0.00 0.00 00:17:03.102 =================================================================================================================== 00:17:03.103 Total : 14401.44 56.26 0.00 0.00 0.00 0.00 0.00 00:17:03.103 00:17:04.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.037 Nvme0n1 : 10.00 14420.50 56.33 0.00 0.00 0.00 0.00 0.00 00:17:04.037 =================================================================================================================== 00:17:04.037 Total : 14420.50 56.33 0.00 0.00 0.00 0.00 0.00 00:17:04.037 00:17:04.037 00:17:04.037 Latency(us) 00:17:04.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.037 Nvme0n1 : 10.01 14424.40 56.35 0.00 0.00 8868.03 5315.70 21554.06 00:17:04.037 =================================================================================================================== 00:17:04.037 Total : 14424.40 56.35 0.00 0.00 8868.03 5315.70 21554.06 00:17:04.037 0 00:17:04.037 04:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2771891 00:17:04.037 04:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 2771891 ']' 00:17:04.037 04:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 2771891 00:17:04.037 04:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:17:04.037 04:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:04.037 04:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2771891 00:17:04.037 04:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:04.037 04:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:04.037 04:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2771891' 00:17:04.037 killing process with pid 2771891 00:17:04.037 04:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 2771891 00:17:04.037 Received shutdown signal, test time was about 10.000000 seconds 00:17:04.037 00:17:04.037 Latency(us) 00:17:04.037 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:04.037 =================================================================================================================== 00:17:04.037 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:04.037 04:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 2771891 00:17:04.298 04:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:04.557 04:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:04.814 04:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11d95694-99c3-469e-b407-d8a5cbb561fc 00:17:04.814 04:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:05.073 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:05.073 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:05.073 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:05.332 [2024-07-14 04:33:25.401757] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:05.332 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11d95694-99c3-469e-b407-d8a5cbb561fc 00:17:05.332 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:05.332 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11d95694-99c3-469e-b407-d8a5cbb561fc 00:17:05.332 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:05.332 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:05.332 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:05.332 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:05.332 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:05.332 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:05.332 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:05.332 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:05.332 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11d95694-99c3-469e-b407-d8a5cbb561fc 00:17:05.590 request: 00:17:05.590 { 00:17:05.590 "uuid": "11d95694-99c3-469e-b407-d8a5cbb561fc", 00:17:05.590 "method": "bdev_lvol_get_lvstores", 00:17:05.590 "req_id": 1 00:17:05.590 } 00:17:05.590 Got JSON-RPC error response 00:17:05.590 response: 00:17:05.590 { 00:17:05.590 "code": -19, 00:17:05.590 "message": "No such device" 00:17:05.590 } 00:17:05.590 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:05.590 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:05.590 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:05.590 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:05.590 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:05.848 aio_bdev 00:17:05.848 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7ad2b2bd-682b-42ec-9f88-eca182339e1c 00:17:05.848 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=7ad2b2bd-682b-42ec-9f88-eca182339e1c 00:17:05.848 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:05.848 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:17:05.848 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:05.848 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:05.848 04:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:06.106 04:33:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7ad2b2bd-682b-42ec-9f88-eca182339e1c -t 2000 00:17:06.400 [ 00:17:06.400 { 00:17:06.400 "name": "7ad2b2bd-682b-42ec-9f88-eca182339e1c", 00:17:06.400 "aliases": [ 00:17:06.400 "lvs/lvol" 00:17:06.400 ], 00:17:06.400 "product_name": "Logical Volume", 00:17:06.400 "block_size": 4096, 00:17:06.400 "num_blocks": 38912, 00:17:06.400 "uuid": "7ad2b2bd-682b-42ec-9f88-eca182339e1c", 00:17:06.400 "assigned_rate_limits": { 00:17:06.400 "rw_ios_per_sec": 0, 00:17:06.400 "rw_mbytes_per_sec": 0, 00:17:06.400 "r_mbytes_per_sec": 0, 00:17:06.400 "w_mbytes_per_sec": 0 00:17:06.400 }, 00:17:06.401 "claimed": false, 00:17:06.401 "zoned": false, 00:17:06.401 "supported_io_types": { 00:17:06.401 "read": true, 00:17:06.401 "write": true, 00:17:06.401 "unmap": true, 00:17:06.401 "write_zeroes": true, 00:17:06.401 "flush": false, 00:17:06.401 "reset": true, 00:17:06.401 "compare": false, 00:17:06.401 "compare_and_write": false, 00:17:06.401 "abort": false, 00:17:06.401 "nvme_admin": false, 00:17:06.401 "nvme_io": false 00:17:06.401 }, 00:17:06.401 "driver_specific": { 00:17:06.401 "lvol": { 00:17:06.401 "lvol_store_uuid": "11d95694-99c3-469e-b407-d8a5cbb561fc", 00:17:06.401 "base_bdev": "aio_bdev", 
00:17:06.401 "thin_provision": false, 00:17:06.401 "num_allocated_clusters": 38, 00:17:06.401 "snapshot": false, 00:17:06.401 "clone": false, 00:17:06.401 "esnap_clone": false 00:17:06.401 } 00:17:06.401 } 00:17:06.401 } 00:17:06.401 ] 00:17:06.683 04:33:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:17:06.683 04:33:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11d95694-99c3-469e-b407-d8a5cbb561fc 00:17:06.683 04:33:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:06.683 04:33:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:06.683 04:33:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11d95694-99c3-469e-b407-d8a5cbb561fc 00:17:06.683 04:33:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:06.941 04:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:06.941 04:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7ad2b2bd-682b-42ec-9f88-eca182339e1c 00:17:07.201 04:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 11d95694-99c3-469e-b407-d8a5cbb561fc 00:17:07.460 04:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:07.719 04:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:07.719 00:17:07.719 real 0m17.504s 00:17:07.719 user 0m16.970s 00:17:07.719 sys 0m1.921s 00:17:07.719 04:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:07.719 04:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:07.719 ************************************ 00:17:07.719 END TEST lvs_grow_clean 00:17:07.719 ************************************ 00:17:07.719 04:33:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:07.719 04:33:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:07.719 04:33:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:07.719 04:33:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:07.719 ************************************ 00:17:07.719 START TEST lvs_grow_dirty 00:17:07.719 ************************************ 00:17:07.719 04:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:17:07.719 04:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:07.719 04:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:07.720 04:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:17:07.720 04:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:07.720 04:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:07.720 04:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:07.720 04:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:07.720 04:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:07.720 04:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:08.288 04:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:08.288 04:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:08.288 04:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=22e56d0c-d948-433d-b8eb-e8252715686c 00:17:08.288 04:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22e56d0c-d948-433d-b8eb-e8252715686c 00:17:08.288 04:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:08.546 04:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:08.546 04:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:08.546 04:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 22e56d0c-d948-433d-b8eb-e8252715686c lvol 150 00:17:08.805 04:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=99d288b8-6215-40bf-86be-0bd1fd831dff 00:17:08.805 04:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:08.805 04:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:09.064 [2024-07-14 04:33:29.190068] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:09.064 [2024-07-14 04:33:29.190177] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:09.064 true 00:17:09.064 04:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22e56d0c-d948-433d-b8eb-e8252715686c 00:17:09.064 04:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:17:09.324 04:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:09.324 04:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:09.584 04:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 99d288b8-6215-40bf-86be-0bd1fd831dff 00:17:09.843 04:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:10.101 [2024-07-14 04:33:30.165038] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.101 04:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:10.359 04:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2774058 00:17:10.359 04:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:10.359 04:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2774058 /var/tmp/bdevperf.sock 00:17:10.360 04:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:10.360 04:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 2774058 ']' 00:17:10.360 04:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:10.360 04:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:10.360 04:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:10.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:10.360 04:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:10.360 04:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:10.360 [2024-07-14 04:33:30.497526] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
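On the initiator side, the clean run above and this dirty run drive I/O the same way: a bdevperf instance is started against its own RPC socket, an NVMe-oF controller is attached to it over TCP, and the harness then kicks off the workload through bdevperf.py. Roughly (a sketch using the sockets and arguments seen in this log, with the long workspace paths shortened):

  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
      # started in the background; -z keeps it idle until it is driven over the RPC socket
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests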
00:17:10.360 [2024-07-14 04:33:30.497599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2774058 ] 00:17:10.360 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.626 [2024-07-14 04:33:30.559814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.627 [2024-07-14 04:33:30.649803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.627 04:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:10.627 04:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:10.627 04:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:11.197 Nvme0n1 00:17:11.197 04:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:11.197 [ 00:17:11.197 { 00:17:11.197 "name": "Nvme0n1", 00:17:11.197 "aliases": [ 00:17:11.197 "99d288b8-6215-40bf-86be-0bd1fd831dff" 00:17:11.197 ], 00:17:11.197 "product_name": "NVMe disk", 00:17:11.197 "block_size": 4096, 00:17:11.197 "num_blocks": 38912, 00:17:11.197 "uuid": "99d288b8-6215-40bf-86be-0bd1fd831dff", 00:17:11.197 "assigned_rate_limits": { 00:17:11.197 "rw_ios_per_sec": 0, 00:17:11.197 "rw_mbytes_per_sec": 0, 00:17:11.197 "r_mbytes_per_sec": 0, 00:17:11.197 "w_mbytes_per_sec": 0 00:17:11.197 }, 00:17:11.197 "claimed": false, 00:17:11.197 "zoned": false, 00:17:11.197 "supported_io_types": { 00:17:11.197 "read": true, 00:17:11.197 "write": true, 00:17:11.197 "unmap": true, 00:17:11.197 "write_zeroes": true, 00:17:11.197 "flush": true, 00:17:11.197 "reset": true, 00:17:11.197 "compare": true, 00:17:11.197 "compare_and_write": true, 00:17:11.197 "abort": true, 00:17:11.197 "nvme_admin": true, 00:17:11.197 "nvme_io": true 00:17:11.197 }, 00:17:11.197 "memory_domains": [ 00:17:11.197 { 00:17:11.197 "dma_device_id": "system", 00:17:11.197 "dma_device_type": 1 00:17:11.197 } 00:17:11.197 ], 00:17:11.197 "driver_specific": { 00:17:11.197 "nvme": [ 00:17:11.197 { 00:17:11.197 "trid": { 00:17:11.197 "trtype": "TCP", 00:17:11.197 "adrfam": "IPv4", 00:17:11.197 "traddr": "10.0.0.2", 00:17:11.197 "trsvcid": "4420", 00:17:11.198 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:11.198 }, 00:17:11.198 "ctrlr_data": { 00:17:11.198 "cntlid": 1, 00:17:11.198 "vendor_id": "0x8086", 00:17:11.198 "model_number": "SPDK bdev Controller", 00:17:11.198 "serial_number": "SPDK0", 00:17:11.198 "firmware_revision": "24.05.1", 00:17:11.198 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:11.198 "oacs": { 00:17:11.198 "security": 0, 00:17:11.198 "format": 0, 00:17:11.198 "firmware": 0, 00:17:11.198 "ns_manage": 0 00:17:11.198 }, 00:17:11.198 "multi_ctrlr": true, 00:17:11.198 "ana_reporting": false 00:17:11.198 }, 00:17:11.198 "vs": { 00:17:11.198 "nvme_version": "1.3" 00:17:11.198 }, 00:17:11.198 "ns_data": { 00:17:11.198 "id": 1, 00:17:11.198 "can_share": true 00:17:11.198 } 00:17:11.198 } 00:17:11.198 ], 00:17:11.198 "mp_policy": "active_passive" 00:17:11.198 } 00:17:11.198 } 00:17:11.198 ] 00:17:11.198 04:33:31 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2774126 00:17:11.198 04:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:11.198 04:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:11.457 Running I/O for 10 seconds... 00:17:12.394 Latency(us) 00:17:12.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.394 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:12.394 Nvme0n1 : 1.00 13099.00 51.17 0.00 0.00 0.00 0.00 0.00 00:17:12.394 =================================================================================================================== 00:17:12.394 Total : 13099.00 51.17 0.00 0.00 0.00 0.00 0.00 00:17:12.394 00:17:13.350 04:33:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 22e56d0c-d948-433d-b8eb-e8252715686c 00:17:13.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:13.350 Nvme0n1 : 2.00 13265.50 51.82 0.00 0.00 0.00 0.00 0.00 00:17:13.350 =================================================================================================================== 00:17:13.350 Total : 13265.50 51.82 0.00 0.00 0.00 0.00 0.00 00:17:13.350 00:17:13.607 true 00:17:13.607 04:33:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22e56d0c-d948-433d-b8eb-e8252715686c 00:17:13.607 04:33:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:13.865 04:33:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:13.865 04:33:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:13.865 04:33:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2774126 00:17:14.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:14.435 Nvme0n1 : 3.00 13323.67 52.05 0.00 0.00 0.00 0.00 0.00 00:17:14.436 =================================================================================================================== 00:17:14.436 Total : 13323.67 52.05 0.00 0.00 0.00 0.00 0.00 00:17:14.436 00:17:15.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.375 Nvme0n1 : 4.00 13372.75 52.24 0.00 0.00 0.00 0.00 0.00 00:17:15.375 =================================================================================================================== 00:17:15.375 Total : 13372.75 52.24 0.00 0.00 0.00 0.00 0.00 00:17:15.375 00:17:16.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.314 Nvme0n1 : 5.00 13431.00 52.46 0.00 0.00 0.00 0.00 0.00 00:17:16.314 =================================================================================================================== 00:17:16.314 Total : 13431.00 52.46 0.00 0.00 0.00 0.00 0.00 00:17:16.314 00:17:17.692 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.692 Nvme0n1 : 6.00 13480.50 52.66 0.00 0.00 0.00 0.00 0.00 00:17:17.692 
=================================================================================================================== 00:17:17.692 Total : 13480.50 52.66 0.00 0.00 0.00 0.00 0.00 00:17:17.692 00:17:18.629 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.629 Nvme0n1 : 7.00 13526.14 52.84 0.00 0.00 0.00 0.00 0.00 00:17:18.629 =================================================================================================================== 00:17:18.629 Total : 13526.14 52.84 0.00 0.00 0.00 0.00 0.00 00:17:18.629 00:17:19.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.565 Nvme0n1 : 8.00 13563.38 52.98 0.00 0.00 0.00 0.00 0.00 00:17:19.565 =================================================================================================================== 00:17:19.565 Total : 13563.38 52.98 0.00 0.00 0.00 0.00 0.00 00:17:19.565 00:17:20.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.500 Nvme0n1 : 9.00 13624.33 53.22 0.00 0.00 0.00 0.00 0.00 00:17:20.501 =================================================================================================================== 00:17:20.501 Total : 13624.33 53.22 0.00 0.00 0.00 0.00 0.00 00:17:20.501 00:17:21.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.437 Nvme0n1 : 10.00 13647.50 53.31 0.00 0.00 0.00 0.00 0.00 00:17:21.437 =================================================================================================================== 00:17:21.437 Total : 13647.50 53.31 0.00 0.00 0.00 0.00 0.00 00:17:21.437 00:17:21.437 00:17:21.437 Latency(us) 00:17:21.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.437 Nvme0n1 : 10.01 13647.26 53.31 0.00 0.00 9366.27 6699.24 14369.37 00:17:21.437 =================================================================================================================== 00:17:21.437 Total : 13647.26 53.31 0.00 0.00 9366.27 6699.24 14369.37 00:17:21.437 0 00:17:21.437 04:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2774058 00:17:21.437 04:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 2774058 ']' 00:17:21.437 04:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 2774058 00:17:21.437 04:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:17:21.437 04:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:21.437 04:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2774058 00:17:21.437 04:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:21.437 04:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:21.437 04:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2774058' 00:17:21.437 killing process with pid 2774058 00:17:21.437 04:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 2774058 00:17:21.437 Received shutdown signal, test time was about 10.000000 seconds 00:17:21.437 00:17:21.437 Latency(us) 00:17:21.437 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:21.437 =================================================================================================================== 00:17:21.437 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:21.437 04:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 2774058 00:17:21.694 04:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:21.952 04:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:22.238 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22e56d0c-d948-433d-b8eb-e8252715686c 00:17:22.238 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:22.495 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:22.495 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:22.495 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2771453 00:17:22.495 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2771453 00:17:22.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2771453 Killed "${NVMF_APP[@]}" "$@" 00:17:22.495 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:22.495 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:22.495 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:22.495 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:22.495 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:22.495 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2775412 00:17:22.495 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:22.495 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2775412 00:17:22.495 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 2775412 ']' 00:17:22.495 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.495 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:22.495 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
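The point of the dirty variant is what happens next: the first nvmf_tgt was killed with SIGKILL while the grown lvstore was still open, so the metadata on the AIO backing file is left dirty. When the replacement target re-creates the AIO bdev, the blobstore has to perform recovery and the logical volume must come back with the grown cluster counts intact. The check reduces to roughly the following (a sketch; the UUID is the lvstore this run created, and aio_bdev_file again stands for the backing file path):

  rpc.py bdev_aio_create aio_bdev_file aio_bdev 4096      # triggers 'Performing recovery on blobstore'
  rpc.py bdev_lvol_get_lvstores -u 22e56d0c-d948-433d-b8eb-e8252715686c | jq -r '.[0].free_clusters'         # expected 61
  rpc.py bdev_lvol_get_lvstores -u 22e56d0c-d948-433d-b8eb-e8252715686c | jq -r '.[0].total_data_clusters'   # expected 99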
00:17:22.495 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:22.495 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:22.495 [2024-07-14 04:33:42.578574] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:22.495 [2024-07-14 04:33:42.578670] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.495 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.495 [2024-07-14 04:33:42.646748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.752 [2024-07-14 04:33:42.733772] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.752 [2024-07-14 04:33:42.733828] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.752 [2024-07-14 04:33:42.733856] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.752 [2024-07-14 04:33:42.733876] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.752 [2024-07-14 04:33:42.733887] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.752 [2024-07-14 04:33:42.733934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.752 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:22.752 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:22.752 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:22.752 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:22.752 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:22.752 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.752 04:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:23.008 [2024-07-14 04:33:43.124358] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:23.008 [2024-07-14 04:33:43.124505] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:23.008 [2024-07-14 04:33:43.124553] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:23.008 04:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:23.008 04:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 99d288b8-6215-40bf-86be-0bd1fd831dff 00:17:23.008 04:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=99d288b8-6215-40bf-86be-0bd1fd831dff 00:17:23.008 04:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:23.008 04:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:23.008 04:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:23.008 04:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:23.008 04:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:23.264 04:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 99d288b8-6215-40bf-86be-0bd1fd831dff -t 2000 00:17:23.826 [ 00:17:23.826 { 00:17:23.826 "name": "99d288b8-6215-40bf-86be-0bd1fd831dff", 00:17:23.826 "aliases": [ 00:17:23.826 "lvs/lvol" 00:17:23.826 ], 00:17:23.826 "product_name": "Logical Volume", 00:17:23.826 "block_size": 4096, 00:17:23.826 "num_blocks": 38912, 00:17:23.826 "uuid": "99d288b8-6215-40bf-86be-0bd1fd831dff", 00:17:23.826 "assigned_rate_limits": { 00:17:23.826 "rw_ios_per_sec": 0, 00:17:23.826 "rw_mbytes_per_sec": 0, 00:17:23.826 "r_mbytes_per_sec": 0, 00:17:23.826 "w_mbytes_per_sec": 0 00:17:23.826 }, 00:17:23.826 "claimed": false, 00:17:23.826 "zoned": false, 00:17:23.826 "supported_io_types": { 00:17:23.826 "read": true, 00:17:23.826 "write": true, 00:17:23.826 "unmap": true, 00:17:23.826 "write_zeroes": true, 00:17:23.826 "flush": false, 00:17:23.826 "reset": true, 00:17:23.826 "compare": false, 00:17:23.826 "compare_and_write": false, 00:17:23.826 "abort": false, 00:17:23.826 "nvme_admin": false, 00:17:23.826 "nvme_io": false 00:17:23.826 }, 00:17:23.826 "driver_specific": { 00:17:23.826 "lvol": { 00:17:23.826 "lvol_store_uuid": "22e56d0c-d948-433d-b8eb-e8252715686c", 00:17:23.826 "base_bdev": "aio_bdev", 00:17:23.826 "thin_provision": false, 00:17:23.826 "num_allocated_clusters": 38, 00:17:23.826 "snapshot": false, 00:17:23.826 "clone": false, 00:17:23.826 "esnap_clone": false 00:17:23.826 } 00:17:23.826 } 00:17:23.826 } 00:17:23.826 ] 00:17:23.826 04:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:23.826 04:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22e56d0c-d948-433d-b8eb-e8252715686c 00:17:23.826 04:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:23.826 04:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:23.826 04:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22e56d0c-d948-433d-b8eb-e8252715686c 00:17:23.826 04:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:24.392 04:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:24.392 04:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:24.392 [2024-07-14 04:33:44.505707] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:24.392 04:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
22e56d0c-d948-433d-b8eb-e8252715686c 00:17:24.392 04:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:24.392 04:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22e56d0c-d948-433d-b8eb-e8252715686c 00:17:24.392 04:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:24.392 04:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:24.392 04:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:24.392 04:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:24.392 04:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:24.392 04:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:24.392 04:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:24.392 04:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:24.392 04:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22e56d0c-d948-433d-b8eb-e8252715686c 00:17:24.648 request: 00:17:24.648 { 00:17:24.648 "uuid": "22e56d0c-d948-433d-b8eb-e8252715686c", 00:17:24.648 "method": "bdev_lvol_get_lvstores", 00:17:24.648 "req_id": 1 00:17:24.648 } 00:17:24.648 Got JSON-RPC error response 00:17:24.648 response: 00:17:24.648 { 00:17:24.648 "code": -19, 00:17:24.648 "message": "No such device" 00:17:24.648 } 00:17:24.648 04:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:24.648 04:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:24.648 04:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:24.648 04:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:24.648 04:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:24.904 aio_bdev 00:17:24.904 04:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 99d288b8-6215-40bf-86be-0bd1fd831dff 00:17:24.904 04:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=99d288b8-6215-40bf-86be-0bd1fd831dff 00:17:24.904 04:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:24.904 04:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:24.904 04:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
00:17:24.904 04:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:24.904 04:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:25.162 04:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 99d288b8-6215-40bf-86be-0bd1fd831dff -t 2000 00:17:25.419 [ 00:17:25.419 { 00:17:25.419 "name": "99d288b8-6215-40bf-86be-0bd1fd831dff", 00:17:25.419 "aliases": [ 00:17:25.419 "lvs/lvol" 00:17:25.419 ], 00:17:25.419 "product_name": "Logical Volume", 00:17:25.419 "block_size": 4096, 00:17:25.419 "num_blocks": 38912, 00:17:25.419 "uuid": "99d288b8-6215-40bf-86be-0bd1fd831dff", 00:17:25.419 "assigned_rate_limits": { 00:17:25.419 "rw_ios_per_sec": 0, 00:17:25.419 "rw_mbytes_per_sec": 0, 00:17:25.419 "r_mbytes_per_sec": 0, 00:17:25.419 "w_mbytes_per_sec": 0 00:17:25.419 }, 00:17:25.419 "claimed": false, 00:17:25.419 "zoned": false, 00:17:25.419 "supported_io_types": { 00:17:25.419 "read": true, 00:17:25.419 "write": true, 00:17:25.419 "unmap": true, 00:17:25.419 "write_zeroes": true, 00:17:25.419 "flush": false, 00:17:25.419 "reset": true, 00:17:25.419 "compare": false, 00:17:25.419 "compare_and_write": false, 00:17:25.419 "abort": false, 00:17:25.419 "nvme_admin": false, 00:17:25.419 "nvme_io": false 00:17:25.419 }, 00:17:25.419 "driver_specific": { 00:17:25.419 "lvol": { 00:17:25.419 "lvol_store_uuid": "22e56d0c-d948-433d-b8eb-e8252715686c", 00:17:25.419 "base_bdev": "aio_bdev", 00:17:25.419 "thin_provision": false, 00:17:25.419 "num_allocated_clusters": 38, 00:17:25.419 "snapshot": false, 00:17:25.419 "clone": false, 00:17:25.419 "esnap_clone": false 00:17:25.419 } 00:17:25.419 } 00:17:25.419 } 00:17:25.419 ] 00:17:25.419 04:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:25.419 04:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22e56d0c-d948-433d-b8eb-e8252715686c 00:17:25.420 04:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:25.678 04:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:25.678 04:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22e56d0c-d948-433d-b8eb-e8252715686c 00:17:25.678 04:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:25.936 04:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:25.936 04:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 99d288b8-6215-40bf-86be-0bd1fd831dff 00:17:26.195 04:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 22e56d0c-d948-433d-b8eb-e8252715686c 00:17:26.453 04:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:26.711 04:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:26.711 00:17:26.711 real 0m18.968s 00:17:26.711 user 0m42.928s 00:17:26.711 sys 0m6.840s 00:17:26.711 04:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:26.711 04:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:26.711 ************************************ 00:17:26.711 END TEST lvs_grow_dirty 00:17:26.711 ************************************ 00:17:26.711 04:33:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:26.711 04:33:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:17:26.711 04:33:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:17:26.711 04:33:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:26.711 04:33:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:26.711 04:33:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:26.711 04:33:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:26.711 04:33:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:26.711 04:33:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:26.711 nvmf_trace.0 00:17:26.969 04:33:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:17:26.969 04:33:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:26.969 04:33:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:26.969 04:33:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:26.969 04:33:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:26.969 04:33:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:26.969 04:33:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:26.969 04:33:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:26.969 rmmod nvme_tcp 00:17:26.969 rmmod nvme_fabrics 00:17:26.969 rmmod nvme_keyring 00:17:26.969 04:33:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:26.969 04:33:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:26.970 04:33:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:26.970 04:33:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2775412 ']' 00:17:26.970 04:33:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2775412 00:17:26.970 04:33:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 2775412 ']' 00:17:26.970 04:33:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 2775412 00:17:26.970 04:33:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:17:26.970 04:33:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:26.970 04:33:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2775412 00:17:26.970 04:33:47 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:26.970 04:33:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:26.970 04:33:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2775412' 00:17:26.970 killing process with pid 2775412 00:17:26.970 04:33:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 2775412 00:17:26.970 04:33:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 2775412 00:17:27.230 04:33:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:27.230 04:33:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:27.230 04:33:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:27.230 04:33:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:27.230 04:33:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:27.230 04:33:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.230 04:33:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.230 04:33:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.135 04:33:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:29.135 00:17:29.135 real 0m41.612s 00:17:29.135 user 1m5.575s 00:17:29.135 sys 0m10.544s 00:17:29.135 04:33:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:29.135 04:33:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:29.135 ************************************ 00:17:29.135 END TEST nvmf_lvs_grow 00:17:29.135 ************************************ 00:17:29.135 04:33:49 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:29.135 04:33:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:29.135 04:33:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:29.135 04:33:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:29.395 ************************************ 00:17:29.395 START TEST nvmf_bdev_io_wait 00:17:29.395 ************************************ 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:29.395 * Looking for test storage... 
00:17:29.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:29.395 04:33:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:31.301 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:31.301 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:31.301 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:31.301 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:31.301 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:31.301 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:31.301 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:31.301 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:31.301 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:31.301 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:31.301 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:31.301 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:31.301 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:31.302 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:31.302 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:31.302 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:31.302 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:31.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:31.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:17:31.302 00:17:31.302 --- 10.0.0.2 ping statistics --- 00:17:31.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.302 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:31.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:31.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:17:31.302 00:17:31.302 --- 10.0.0.1 ping statistics --- 00:17:31.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.302 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2777927 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2777927 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 2777927 ']' 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:31.302 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:31.560 [2024-07-14 04:33:51.517318] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:17:31.560 [2024-07-14 04:33:51.517402] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.560 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.560 [2024-07-14 04:33:51.585937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:31.560 [2024-07-14 04:33:51.679496] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.560 [2024-07-14 04:33:51.679553] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.560 [2024-07-14 04:33:51.679571] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.560 [2024-07-14 04:33:51.679584] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.560 [2024-07-14 04:33:51.679596] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:31.560 [2024-07-14 04:33:51.679953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.560 [2024-07-14 04:33:51.679981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.560 [2024-07-14 04:33:51.680030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:31.560 [2024-07-14 04:33:51.680033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.560 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:31.560 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:17:31.560 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:31.560 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:31.560 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:31.560 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.560 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:31.560 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.560 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:31.560 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.560 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:31.560 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.560 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:31.818 [2024-07-14 04:33:51.819626] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.818 04:33:51 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:31.818 Malloc0 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:31.818 [2024-07-14 04:33:51.880288] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2778072 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2778074 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:31.818 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:31.819 { 00:17:31.819 "params": { 00:17:31.819 "name": "Nvme$subsystem", 00:17:31.819 "trtype": "$TEST_TRANSPORT", 00:17:31.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.819 "adrfam": "ipv4", 00:17:31.819 "trsvcid": "$NVMF_PORT", 00:17:31.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.819 "hdgst": ${hdgst:-false}, 00:17:31.819 "ddgst": ${ddgst:-false} 00:17:31.819 }, 00:17:31.819 "method": "bdev_nvme_attach_controller" 00:17:31.819 } 00:17:31.819 EOF 00:17:31.819 )") 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2778076 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:31.819 { 00:17:31.819 "params": { 00:17:31.819 "name": "Nvme$subsystem", 00:17:31.819 "trtype": "$TEST_TRANSPORT", 00:17:31.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.819 "adrfam": "ipv4", 00:17:31.819 "trsvcid": "$NVMF_PORT", 00:17:31.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.819 "hdgst": ${hdgst:-false}, 00:17:31.819 "ddgst": ${ddgst:-false} 00:17:31.819 }, 00:17:31.819 "method": "bdev_nvme_attach_controller" 00:17:31.819 } 00:17:31.819 EOF 00:17:31.819 )") 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2778079 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:31.819 { 00:17:31.819 "params": { 00:17:31.819 "name": "Nvme$subsystem", 00:17:31.819 "trtype": "$TEST_TRANSPORT", 00:17:31.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.819 "adrfam": "ipv4", 00:17:31.819 "trsvcid": "$NVMF_PORT", 00:17:31.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.819 "hdgst": ${hdgst:-false}, 00:17:31.819 "ddgst": ${ddgst:-false} 00:17:31.819 }, 00:17:31.819 "method": "bdev_nvme_attach_controller" 00:17:31.819 } 00:17:31.819 EOF 00:17:31.819 )") 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:17:31.819 { 00:17:31.819 "params": { 00:17:31.819 "name": "Nvme$subsystem", 00:17:31.819 "trtype": "$TEST_TRANSPORT", 00:17:31.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.819 "adrfam": "ipv4", 00:17:31.819 "trsvcid": "$NVMF_PORT", 00:17:31.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.819 "hdgst": ${hdgst:-false}, 00:17:31.819 "ddgst": ${ddgst:-false} 00:17:31.819 }, 00:17:31.819 "method": "bdev_nvme_attach_controller" 00:17:31.819 } 00:17:31.819 EOF 00:17:31.819 )") 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2778072 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:31.819 "params": { 00:17:31.819 "name": "Nvme1", 00:17:31.819 "trtype": "tcp", 00:17:31.819 "traddr": "10.0.0.2", 00:17:31.819 "adrfam": "ipv4", 00:17:31.819 "trsvcid": "4420", 00:17:31.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:31.819 "hdgst": false, 00:17:31.819 "ddgst": false 00:17:31.819 }, 00:17:31.819 "method": "bdev_nvme_attach_controller" 00:17:31.819 }' 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:31.819 "params": { 00:17:31.819 "name": "Nvme1", 00:17:31.819 "trtype": "tcp", 00:17:31.819 "traddr": "10.0.0.2", 00:17:31.819 "adrfam": "ipv4", 00:17:31.819 "trsvcid": "4420", 00:17:31.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:31.819 "hdgst": false, 00:17:31.819 "ddgst": false 00:17:31.819 }, 00:17:31.819 "method": "bdev_nvme_attach_controller" 00:17:31.819 }' 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:31.819 "params": { 00:17:31.819 "name": "Nvme1", 00:17:31.819 "trtype": "tcp", 00:17:31.819 "traddr": "10.0.0.2", 00:17:31.819 "adrfam": "ipv4", 00:17:31.819 "trsvcid": "4420", 00:17:31.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:31.819 "hdgst": false, 00:17:31.819 "ddgst": false 00:17:31.819 }, 00:17:31.819 "method": "bdev_nvme_attach_controller" 00:17:31.819 }' 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:31.819 04:33:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:31.819 "params": { 00:17:31.819 "name": "Nvme1", 00:17:31.819 "trtype": "tcp", 00:17:31.819 "traddr": "10.0.0.2", 00:17:31.819 "adrfam": "ipv4", 00:17:31.819 "trsvcid": "4420", 00:17:31.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:31.819 "hdgst": false, 00:17:31.819 "ddgst": false 00:17:31.819 }, 00:17:31.819 "method": "bdev_nvme_attach_controller" 00:17:31.819 }' 00:17:31.819 [2024-07-14 04:33:51.926391] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:31.819 [2024-07-14 04:33:51.926392] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:31.819 [2024-07-14 04:33:51.926391] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:31.819 [2024-07-14 04:33:51.926483] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-14 04:33:51.926483] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-14 04:33:51.926484] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:31.819 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:31.819 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:31.819 [2024-07-14 04:33:51.927993] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:17:31.819 [2024-07-14 04:33:51.928066] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:31.819 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.077 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.077 [2024-07-14 04:33:52.103186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.077 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.077 [2024-07-14 04:33:52.177784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:32.077 [2024-07-14 04:33:52.203241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.335 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.335 [2024-07-14 04:33:52.278671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:32.335 [2024-07-14 04:33:52.301862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.335 [2024-07-14 04:33:52.378161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.335 [2024-07-14 04:33:52.380277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:32.335 [2024-07-14 04:33:52.445511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:32.594 Running I/O for 1 seconds... 00:17:32.594 Running I/O for 1 seconds... 00:17:32.594 Running I/O for 1 seconds... 00:17:32.594 Running I/O for 1 seconds... 00:17:33.531 00:17:33.531 Latency(us) 00:17:33.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.531 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:33.531 Nvme1n1 : 1.00 199975.21 781.15 0.00 0.00 637.82 262.45 873.81 00:17:33.531 =================================================================================================================== 00:17:33.531 Total : 199975.21 781.15 0.00 0.00 637.82 262.45 873.81 00:17:33.531 00:17:33.531 Latency(us) 00:17:33.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.531 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:33.531 Nvme1n1 : 1.02 7108.66 27.77 0.00 0.00 17874.44 4466.16 27185.30 00:17:33.531 =================================================================================================================== 00:17:33.531 Total : 7108.66 27.77 0.00 0.00 17874.44 4466.16 27185.30 00:17:33.531 00:17:33.531 Latency(us) 00:17:33.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.531 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:33.531 Nvme1n1 : 1.01 9018.74 35.23 0.00 0.00 14127.44 8398.32 24660.95 00:17:33.531 =================================================================================================================== 00:17:33.531 Total : 9018.74 35.23 0.00 0.00 14127.44 8398.32 24660.95 00:17:33.789 00:17:33.789 Latency(us) 00:17:33.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.789 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:33.789 Nvme1n1 : 1.01 6784.29 26.50 0.00 0.00 18796.63 7184.69 38253.61 00:17:33.789 =================================================================================================================== 00:17:33.789 Total : 6784.29 26.50 0.00 0.00 18796.63 7184.69 38253.61 00:17:33.789 04:33:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 
2778074 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2778076 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2778079 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:34.048 rmmod nvme_tcp 00:17:34.048 rmmod nvme_fabrics 00:17:34.048 rmmod nvme_keyring 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2777927 ']' 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2777927 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 2777927 ']' 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 2777927 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2777927 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2777927' 00:17:34.048 killing process with pid 2777927 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 2777927 00:17:34.048 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 2777927 00:17:34.307 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:34.307 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:34.307 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:34.307 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:34.307 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:17:34.307 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.307 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.307 04:33:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.214 04:33:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:36.214 00:17:36.214 real 0m7.037s 00:17:36.214 user 0m15.588s 00:17:36.214 sys 0m3.558s 00:17:36.214 04:33:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:36.214 04:33:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:36.214 ************************************ 00:17:36.214 END TEST nvmf_bdev_io_wait 00:17:36.214 ************************************ 00:17:36.214 04:33:56 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:36.214 04:33:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:36.214 04:33:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:36.214 04:33:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:36.473 ************************************ 00:17:36.473 START TEST nvmf_queue_depth 00:17:36.473 ************************************ 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:36.473 * Looking for test storage... 00:17:36.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.473 04:33:56 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.473 04:33:56 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:36.473 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:36.474 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.474 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:36.474 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:36.474 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:36.474 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.474 04:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:36.474 04:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.474 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:36.474 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:36.474 04:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:36.474 04:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:38.375 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:38.376 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:38.376 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:38.376 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:38.376 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:38.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:17:38.376 00:17:38.376 --- 10.0.0.2 ping statistics --- 00:17:38.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.376 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:38.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:38.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:17:38.376 00:17:38.376 --- 10.0.0.1 ping statistics --- 00:17:38.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.376 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:38.376 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:38.670 04:33:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:38.670 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:38.670 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:38.670 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:38.670 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2780266 00:17:38.670 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
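Distilled from the trace above, the interface plumbing for this test case amounts to the following sketch (it assumes the same cvl_0_0/cvl_0_1 device names and 10.0.0.0/24 addressing reported by this run; on another host the NIC names will differ):

# Reset both ports, move the target-side port into its own namespace, and address both ends.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic to the default port on the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity-check connectivity in both directions, then load the initiator driver.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp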
00:17:38.670 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2780266 00:17:38.670 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 2780266 ']' 00:17:38.670 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.670 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:38.670 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.670 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:38.670 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:38.670 [2024-07-14 04:33:58.625245] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:38.670 [2024-07-14 04:33:58.625321] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.670 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.670 [2024-07-14 04:33:58.686939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.670 [2024-07-14 04:33:58.773706] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.670 [2024-07-14 04:33:58.773766] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.670 [2024-07-14 04:33:58.773794] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.670 [2024-07-14 04:33:58.773806] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.670 [2024-07-14 04:33:58.773815] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
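The target is then started inside that namespace and polled over its RPC socket before the test proceeds. A hedged sketch of the equivalent manual steps follows; the polling loop is only an approximation of the waitforlisten helper, not its actual implementation:

# Start the target in the namespace: shm id 0, all tracepoint groups enabled, core mask 0x2.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Rough equivalent of waitforlisten: poll the default RPC socket until the app answers.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

# Because tracing was enabled with -e 0xFFFF, a runtime snapshot can be captured with
# 'spdk_trace -s nvmf -i 0', as the app_setup_trace notice above suggests.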
00:17:38.670 [2024-07-14 04:33:58.773841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:38.929 [2024-07-14 04:33:58.912648] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:38.929 Malloc0 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:38.929 [2024-07-14 04:33:58.980930] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2780321 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- 
target/queue_depth.sh@33 -- # waitforlisten 2780321 /var/tmp/bdevperf.sock 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 2780321 ']' 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:38.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:38.929 04:33:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:38.929 [2024-07-14 04:33:59.027807] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:38.929 [2024-07-14 04:33:59.027894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2780321 ] 00:17:38.929 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.929 [2024-07-14 04:33:59.086270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.187 [2024-07-14 04:33:59.173643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.187 04:33:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:39.187 04:33:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:39.187 04:33:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:39.187 04:33:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.187 04:33:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:39.187 NVMe0n1 00:17:39.187 04:33:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.187 04:33:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:39.446 Running I/O for 10 seconds... 
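Stripped of the xtrace noise, the queue-depth test body traced above reduces to the sequence below (paths shortened, scripts/rpc.py used in place of the rpc_cmd wrapper; the -q 1024 flag is what drives the 1024-deep verify workload whose results follow):

# Target side: TCP transport, a 64 MB / 512-byte-block malloc bdev, and a subsystem on 10.0.0.2:4420.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf in wait-for-RPC mode, attach the remote controller, run the workload.
bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests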
00:17:49.428 00:17:49.428 Latency(us) 00:17:49.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.428 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:49.428 Verification LBA range: start 0x0 length 0x4000 00:17:49.428 NVMe0n1 : 10.09 8587.50 33.54 0.00 0.00 118636.81 24369.68 75730.49 00:17:49.429 =================================================================================================================== 00:17:49.429 Total : 8587.50 33.54 0.00 0.00 118636.81 24369.68 75730.49 00:17:49.429 0 00:17:49.429 04:34:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2780321 00:17:49.429 04:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 2780321 ']' 00:17:49.429 04:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 2780321 00:17:49.429 04:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:49.429 04:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:49.429 04:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2780321 00:17:49.687 04:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:49.687 04:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:49.687 04:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2780321' 00:17:49.687 killing process with pid 2780321 00:17:49.687 04:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 2780321 00:17:49.687 Received shutdown signal, test time was about 10.000000 seconds 00:17:49.687 00:17:49.687 Latency(us) 00:17:49.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.688 =================================================================================================================== 00:17:49.688 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:49.688 04:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 2780321 00:17:49.688 04:34:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:49.688 04:34:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:49.688 04:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:49.688 04:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:49.688 04:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:49.688 04:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:49.688 04:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:49.688 04:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:49.688 rmmod nvme_tcp 00:17:49.948 rmmod nvme_fabrics 00:17:49.948 rmmod nvme_keyring 00:17:49.948 04:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:49.949 04:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:49.949 04:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:49.949 04:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2780266 ']' 00:17:49.949 04:34:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2780266 00:17:49.949 04:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 
2780266 ']' 00:17:49.949 04:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 2780266 00:17:49.949 04:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:49.949 04:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:49.949 04:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2780266 00:17:49.949 04:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:49.949 04:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:49.949 04:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2780266' 00:17:49.949 killing process with pid 2780266 00:17:49.949 04:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 2780266 00:17:49.949 04:34:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 2780266 00:17:50.207 04:34:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:50.208 04:34:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:50.208 04:34:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:50.208 04:34:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:50.208 04:34:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:50.208 04:34:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.208 04:34:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.208 04:34:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.114 04:34:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:52.114 00:17:52.114 real 0m15.846s 00:17:52.114 user 0m22.242s 00:17:52.114 sys 0m3.055s 00:17:52.114 04:34:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:52.114 04:34:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:52.114 ************************************ 00:17:52.114 END TEST nvmf_queue_depth 00:17:52.114 ************************************ 00:17:52.114 04:34:12 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:52.114 04:34:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:52.114 04:34:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:52.114 04:34:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:52.373 ************************************ 00:17:52.373 START TEST nvmf_target_multipath 00:17:52.373 ************************************ 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:52.373 * Looking for test storage... 
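The teardown that closes each of these test cases follows the same pattern; a condensed sketch of what nvmftestfini amounts to in this tcp/phy configuration is shown below. The namespace removal is performed by the _remove_spdk_ns helper, whose body is not traced here, so the ip netns delete line is an assumption about what it does:

# Stop the I/O generator and the target, then unload the initiator-side modules.
kill "$bdevperf_pid"; wait "$bdevperf_pid"
sync
modprobe -v -r nvme-tcp        # also drops nvme_fabrics / nvme_keyring, as the rmmod lines above show
modprobe -v -r nvme-fabrics
kill "$nvmfpid"; wait "$nvmfpid"

# Tear down the test namespace (assumed equivalent of _remove_spdk_ns) and flush the initiator address.
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1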
00:17:52.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.373 
04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:52.373 04:34:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:54.277 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:54.277 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.277 
04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:54.277 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:54.277 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:54.277 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:54.277 
04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:54.278 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:54.278 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:54.278 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:54.278 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:54.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:54.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:17:54.278 00:17:54.278 --- 10.0.0.2 ping statistics --- 00:17:54.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.278 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:17:54.278 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:54.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:54.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:17:54.278 00:17:54.278 --- 10.0.0.1 ping statistics --- 00:17:54.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.278 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:17:54.278 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.278 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:54.278 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:54.278 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.278 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:54.278 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:54.278 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.278 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:54.278 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:54.538 04:34:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:54.538 04:34:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:54.538 only one NIC for nvmf test 00:17:54.538 04:34:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:17:54.538 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:54.538 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:54.538 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:54.538 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:54.538 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:54.538 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:54.538 rmmod nvme_tcp 00:17:54.538 rmmod nvme_fabrics 00:17:54.538 rmmod nvme_keyring 00:17:54.538 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:54.538 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:54.538 04:34:14 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:54.538 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:54.538 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:54.538 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:54.538 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:54.538 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:54.538 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:54.538 04:34:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.538 04:34:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.538 04:34:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:56.446 00:17:56.446 real 0m4.277s 00:17:56.446 user 0m0.782s 00:17:56.446 sys 0m1.472s 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:56.446 04:34:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:56.446 
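The teardown logged above (nvmftestfini) is the mirror image of that setup: unload the initiator-side NVMe modules, drop the test namespace, and flush the test address. A condensed sketch of what the trace shows; the namespace removal itself runs with xtrace suppressed (`_remove_spdk_ns 14> /dev/null`), so the `ip netns delete` line is an assumption about what that helper does:

sync
set +e
for i in {1..20}; do                              # retry until the modules actually unload
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
done
set -e
ip netns delete cvl_0_0_ns_spdk 2>/dev/null       # assumed body of _remove_spdk_ns (not visible in the trace)
ip -4 addr flush cvl_0_1                          # drop the initiator-side test address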
************************************ 00:17:56.446 END TEST nvmf_target_multipath 00:17:56.446 ************************************ 00:17:56.446 04:34:16 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:56.447 04:34:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:56.447 04:34:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:56.447 04:34:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:56.706 ************************************ 00:17:56.706 START TEST nvmf_zcopy 00:17:56.706 ************************************ 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:56.706 * Looking for test storage... 00:17:56.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # 
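Each sub-suite is driven through run_test, which is what prints the starred START/END banners and the real/user/sys timing seen above before handing control to the next script (here zcopy.sh). A minimal stand-in with the same observable behaviour, assuming the wrapper does little more than time the script and report its status (the real run_test in autotest_common.sh also manages xtrace and result bookkeeping):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                                  # run the test script with its arguments
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp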
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.706 04:34:16 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 
00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:56.707 04:34:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 
00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:58.612 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:58.612 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:58.612 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:58.612 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:58.612 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:58.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:58.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:17:58.871 00:17:58.871 --- 10.0.0.2 ping statistics --- 00:17:58.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.871 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:58.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:58.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:17:58.871 00:17:58.871 --- 10.0.0.1 ping statistics --- 00:17:58.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.871 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2785362 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2785362 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 2785362 ']' 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:58.871 04:34:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:58.871 [2024-07-14 04:34:18.887950] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:58.871 [2024-07-14 04:34:18.888022] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.871 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.871 [2024-07-14 04:34:18.956039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.871 [2024-07-14 04:34:19.046601] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.871 [2024-07-14 04:34:19.046655] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:58.871 [2024-07-14 04:34:19.046683] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.871 [2024-07-14 04:34:19.046696] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.871 [2024-07-14 04:34:19.046713] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.871 [2024-07-14 04:34:19.046754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:59.129 [2024-07-14 04:34:19.198890] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:59.129 [2024-07-14 04:34:19.215108] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:59.129 malloc0 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.129 
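With nvmf_tgt already running inside the namespace (started above as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2` and waited on via waitforlisten), zcopy.sh assembles its target through a handful of RPCs. The rpc_cmd calls in the trace map onto roughly these scripts/rpc.py invocations against the default /var/tmp/spdk.sock socket (values copied from the log; the namespace attach that completes the sequence is logged right below):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy           # TCP transport; --zcopy is the feature under test
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                   # 32 MiB RAM-backed bdev, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1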
04:34:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:59.129 { 00:17:59.129 "params": { 00:17:59.129 "name": "Nvme$subsystem", 00:17:59.129 "trtype": "$TEST_TRANSPORT", 00:17:59.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:59.129 "adrfam": "ipv4", 00:17:59.129 "trsvcid": "$NVMF_PORT", 00:17:59.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:59.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:59.129 "hdgst": ${hdgst:-false}, 00:17:59.129 "ddgst": ${ddgst:-false} 00:17:59.129 }, 00:17:59.129 "method": "bdev_nvme_attach_controller" 00:17:59.129 } 00:17:59.129 EOF 00:17:59.129 )") 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:59.129 04:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:59.129 "params": { 00:17:59.129 "name": "Nvme1", 00:17:59.129 "trtype": "tcp", 00:17:59.129 "traddr": "10.0.0.2", 00:17:59.129 "adrfam": "ipv4", 00:17:59.129 "trsvcid": "4420", 00:17:59.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:59.129 "hdgst": false, 00:17:59.129 "ddgst": false 00:17:59.129 }, 00:17:59.129 "method": "bdev_nvme_attach_controller" 00:17:59.129 }' 00:17:59.129 [2024-07-14 04:34:19.297818] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:59.129 [2024-07-14 04:34:19.297932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2785504 ] 00:17:59.388 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.388 [2024-07-14 04:34:19.366404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.388 [2024-07-14 04:34:19.460507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.646 Running I/O for 10 seconds... 
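bdevperf receives its controller configuration over a file descriptor rather than a file: gen_nvmf_target_json prints the bdev_nvme_attach_controller block shown above, and process substitution hands it to `--json` as /dev/fd/62. A standalone reconstruction of this first run (10-second verify workload, 8 KiB I/O, queue depth 128); only the controller block and the closing `jq .` are visible in the trace, so the outer "subsystems"/"bdev" wrapper below is assumed to follow the standard SPDK JSON-config layout:

./build/examples/bdevperf --json <(
  cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
) -t 10 -q 128 -w verify -o 8192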
00:18:09.630 00:18:09.630 Latency(us) 00:18:09.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.630 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:09.630 Verification LBA range: start 0x0 length 0x1000 00:18:09.630 Nvme1n1 : 10.05 5980.30 46.72 0.00 0.00 21266.58 2767.08 43302.31 00:18:09.630 =================================================================================================================== 00:18:09.630 Total : 5980.30 46.72 0.00 0.00 21266.58 2767.08 43302.31 00:18:09.890 04:34:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2786695 00:18:09.890 04:34:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:09.890 04:34:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:09.890 04:34:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:09.890 04:34:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:09.890 04:34:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:09.890 04:34:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:09.890 04:34:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:09.890 04:34:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:09.890 { 00:18:09.890 "params": { 00:18:09.890 "name": "Nvme$subsystem", 00:18:09.890 "trtype": "$TEST_TRANSPORT", 00:18:09.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:09.890 "adrfam": "ipv4", 00:18:09.890 "trsvcid": "$NVMF_PORT", 00:18:09.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:09.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:09.890 "hdgst": ${hdgst:-false}, 00:18:09.890 "ddgst": ${ddgst:-false} 00:18:09.890 }, 00:18:09.890 "method": "bdev_nvme_attach_controller" 00:18:09.890 } 00:18:09.890 EOF 00:18:09.890 )") 00:18:09.890 [2024-07-14 04:34:30.001345] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.890 [2024-07-14 04:34:30.001393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.890 04:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:09.890 04:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
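From here the trace is dominated by repeated pairs of 'Requested NSID 1 already in use' and 'Unable to add namespace'. Each pair is the target's response to another nvmf_subsystem_add_ns RPC for NSID 1, which already belongs to malloc0; the second message is logged from nvmf_rpc_ns_paused, the callback that runs once the subsystem has been paused for the RPC and before it is resumed. Read together with the concurrent 5-second randrw bdevperf job, the loop appears intended to exercise subsystem pause/resume while zero-copy I/O is in flight rather than to actually add a namespace. A hypothetical sketch of the driving loop (the real zcopy.sh code is not visible in this part of the trace):

# Assumed shape of the loop producing the messages below; NQN and bdev names taken from the earlier setup.
while kill -0 "$bdevperf_pid" 2>/dev/null; do      # $bdevperf_pid is a hypothetical handle on the running job
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done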
00:18:09.890 04:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:09.890 04:34:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:09.890 "params": { 00:18:09.890 "name": "Nvme1", 00:18:09.890 "trtype": "tcp", 00:18:09.890 "traddr": "10.0.0.2", 00:18:09.890 "adrfam": "ipv4", 00:18:09.890 "trsvcid": "4420", 00:18:09.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.890 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.890 "hdgst": false, 00:18:09.890 "ddgst": false 00:18:09.890 }, 00:18:09.890 "method": "bdev_nvme_attach_controller" 00:18:09.890 }' 00:18:09.890 [2024-07-14 04:34:30.009301] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.890 [2024-07-14 04:34:30.009335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.890 [2024-07-14 04:34:30.017355] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.890 [2024-07-14 04:34:30.017392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.890 [2024-07-14 04:34:30.025347] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.890 [2024-07-14 04:34:30.025377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.890 [2024-07-14 04:34:30.033364] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.890 [2024-07-14 04:34:30.033390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.890 [2024-07-14 04:34:30.041385] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.890 [2024-07-14 04:34:30.041411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.890 [2024-07-14 04:34:30.044249] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:18:09.890 [2024-07-14 04:34:30.044329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2786695 ] 00:18:09.890 [2024-07-14 04:34:30.049407] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.890 [2024-07-14 04:34:30.049433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.890 [2024-07-14 04:34:30.057429] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.890 [2024-07-14 04:34:30.057454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.890 [2024-07-14 04:34:30.065450] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.890 [2024-07-14 04:34:30.065475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.890 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.890 [2024-07-14 04:34:30.073471] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.890 [2024-07-14 04:34:30.073496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.890 [2024-07-14 04:34:30.081495] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.890 [2024-07-14 04:34:30.081521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.150 [2024-07-14 04:34:30.089515] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.150 [2024-07-14 04:34:30.089539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.150 [2024-07-14 04:34:30.097541] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.150 [2024-07-14 04:34:30.097565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.150 [2024-07-14 04:34:30.105562] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.150 [2024-07-14 04:34:30.105587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.150 [2024-07-14 04:34:30.109204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.150 [2024-07-14 04:34:30.113603] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.150 [2024-07-14 04:34:30.113635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.150 [2024-07-14 04:34:30.121651] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.150 [2024-07-14 04:34:30.121693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.150 [2024-07-14 04:34:30.129641] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.150 [2024-07-14 04:34:30.129668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.150 [2024-07-14 04:34:30.137655] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.150 [2024-07-14 04:34:30.137681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.150 [2024-07-14 04:34:30.145674] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.150 [2024-07-14 04:34:30.145699] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.150 [2024-07-14 04:34:30.153695] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.150 [2024-07-14 04:34:30.153719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.150 [2024-07-14 04:34:30.161728] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.150 [2024-07-14 04:34:30.161756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.150 [2024-07-14 04:34:30.169785] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.150 [2024-07-14 04:34:30.169828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.150 [2024-07-14 04:34:30.177767] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.150 [2024-07-14 04:34:30.177793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.150 [2024-07-14 04:34:30.185784] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.150 [2024-07-14 04:34:30.185809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.150 [2024-07-14 04:34:30.193805] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.150 [2024-07-14 04:34:30.193830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.150 [2024-07-14 04:34:30.201830] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.150 [2024-07-14 04:34:30.201854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.150 [2024-07-14 04:34:30.207646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.150 [2024-07-14 04:34:30.209850] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.150 [2024-07-14 04:34:30.209881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.150 [2024-07-14 04:34:30.217878] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.150 [2024-07-14 04:34:30.217917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.150 [2024-07-14 04:34:30.225952] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.151 [2024-07-14 04:34:30.225992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.151 [2024-07-14 04:34:30.233974] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.151 [2024-07-14 04:34:30.234014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.151 [2024-07-14 04:34:30.241994] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.151 [2024-07-14 04:34:30.242037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.151 [2024-07-14 04:34:30.250014] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.151 [2024-07-14 04:34:30.250059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.151 [2024-07-14 04:34:30.258027] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.151 [2024-07-14 04:34:30.258066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:18:10.151 [2024-07-14 04:34:30.266043] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.151 [2024-07-14 04:34:30.266087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.151 [2024-07-14 04:34:30.274059] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.151 [2024-07-14 04:34:30.274101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.151 [2024-07-14 04:34:30.282043] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.151 [2024-07-14 04:34:30.282066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.151 [2024-07-14 04:34:30.290105] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.151 [2024-07-14 04:34:30.290179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.151 [2024-07-14 04:34:30.298125] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.151 [2024-07-14 04:34:30.298184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.151 [2024-07-14 04:34:30.306126] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.151 [2024-07-14 04:34:30.306172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.151 [2024-07-14 04:34:30.314127] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.151 [2024-07-14 04:34:30.314165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.151 [2024-07-14 04:34:30.322184] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.151 [2024-07-14 04:34:30.322208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.151 [2024-07-14 04:34:30.330215] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.151 [2024-07-14 04:34:30.330245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.151 [2024-07-14 04:34:30.338234] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.151 [2024-07-14 04:34:30.338261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.411 [2024-07-14 04:34:30.346255] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.411 [2024-07-14 04:34:30.346283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.411 [2024-07-14 04:34:30.354354] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.411 [2024-07-14 04:34:30.354383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.411 [2024-07-14 04:34:30.362309] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.411 [2024-07-14 04:34:30.362336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.411 [2024-07-14 04:34:30.370332] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.411 [2024-07-14 04:34:30.370358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.411 [2024-07-14 04:34:30.378351] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:18:10.411 [2024-07-14 04:34:30.378376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.411 [2024-07-14 04:34:30.386385] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.411 [2024-07-14 04:34:30.386414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.411 [2024-07-14 04:34:30.394400] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.411 [2024-07-14 04:34:30.394426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.411 Running I/O for 5 seconds... 00:18:10.411 [2024-07-14 04:34:30.402424] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.411 [2024-07-14 04:34:30.402449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.411 [2024-07-14 04:34:30.416522] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.411 [2024-07-14 04:34:30.416555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.411 [2024-07-14 04:34:30.426798] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.411 [2024-07-14 04:34:30.426826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.412 [2024-07-14 04:34:30.438582] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.412 [2024-07-14 04:34:30.438612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.412 [2024-07-14 04:34:30.450365] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.412 [2024-07-14 04:34:30.450395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.412 [2024-07-14 04:34:30.461711] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.412 [2024-07-14 04:34:30.461742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.412 [2024-07-14 04:34:30.473035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.412 [2024-07-14 04:34:30.473062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.412 [2024-07-14 04:34:30.484663] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.412 [2024-07-14 04:34:30.484694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.412 [2024-07-14 04:34:30.495986] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.412 [2024-07-14 04:34:30.496014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.412 [2024-07-14 04:34:30.507502] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.412 [2024-07-14 04:34:30.507532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.412 [2024-07-14 04:34:30.519040] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.412 [2024-07-14 04:34:30.519068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.412 [2024-07-14 04:34:30.530258] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.412 [2024-07-14 04:34:30.530289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:18:10.412 [2024-07-14 04:34:30.541599] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.412 [2024-07-14 04:34:30.541629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.412 [2024-07-14 04:34:30.553047] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.412 [2024-07-14 04:34:30.553076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.412 [2024-07-14 04:34:30.564952] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.412 [2024-07-14 04:34:30.564979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.412 [2024-07-14 04:34:30.576383] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.412 [2024-07-14 04:34:30.576413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.412 [2024-07-14 04:34:30.587442] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.412 [2024-07-14 04:34:30.587471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.412 [2024-07-14 04:34:30.598787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.412 [2024-07-14 04:34:30.598816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.609960] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.609988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.620626] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.620653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.631361] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.631390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.642079] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.642106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.652521] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.652550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.663578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.663607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.674687] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.674716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.685694] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.685735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.696155] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:10.672 [2024-07-14 04:34:30.696181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.706929] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.706956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.717784] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.717810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.728437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.728465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.739582] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.739608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.749842] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.749893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.759479] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.759507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.771095] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.771138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.781966] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.781994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.792233] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.792260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.802951] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.802978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.814037] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.814063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.825251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.825279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.836131] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.836158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.846902] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.846943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.672 [2024-07-14 04:34:30.857697] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.672 [2024-07-14 04:34:30.857725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:30.868627] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:30.868656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:30.879327] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:30.879356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:30.889957] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:30.889984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:30.900920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:30.900947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:30.913214] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:30.913240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:30.922793] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:30.922822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:30.934351] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:30.934380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:30.945339] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:30.945368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:30.956643] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:30.956673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:30.967643] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:30.967672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:30.979165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:30.979191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:30.990138] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:30.990179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:31.001221] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:31.001250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:31.012191] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:31.012220] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:31.024155] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:31.024181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:31.033679] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:31.033707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:31.045102] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:31.045129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:31.055345] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:31.055373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:31.066397] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:31.066425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:31.079426] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:31.079455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:31.089348] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:31.089377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:31.101073] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:31.101102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.930 [2024-07-14 04:34:31.112092] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.930 [2024-07-14 04:34:31.112118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.188 [2024-07-14 04:34:31.122801] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.122829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.133343] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.133370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.143833] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.143860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.154948] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.154975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.165716] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.165742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.176745] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.176787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.187402] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.187428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.198393] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.198420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.209111] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.209142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.219742] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.219768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.230329] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.230356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.240803] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.240829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.251520] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.251547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.262612] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.262653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.273191] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.273217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.283756] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.283791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.296320] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.296347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.305583] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.305609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.318764] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.318790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.328506] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.328532] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.339615] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.339641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.350053] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.350080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.360278] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.360305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.189 [2024-07-14 04:34:31.370932] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.189 [2024-07-14 04:34:31.370958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.447 [2024-07-14 04:34:31.381205] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.447 [2024-07-14 04:34:31.381233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.447 [2024-07-14 04:34:31.391630] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.447 [2024-07-14 04:34:31.391658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.447 [2024-07-14 04:34:31.402005] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.447 [2024-07-14 04:34:31.402033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.447 [2024-07-14 04:34:31.412822] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.447 [2024-07-14 04:34:31.412848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.447 [2024-07-14 04:34:31.423551] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.447 [2024-07-14 04:34:31.423578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.447 [2024-07-14 04:34:31.434515] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.447 [2024-07-14 04:34:31.434556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.447 [2024-07-14 04:34:31.444798] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.447 [2024-07-14 04:34:31.444825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.448 [2024-07-14 04:34:31.455046] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.448 [2024-07-14 04:34:31.455073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.448 [2024-07-14 04:34:31.465278] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.448 [2024-07-14 04:34:31.465304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.448 [2024-07-14 04:34:31.476357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.448 [2024-07-14 04:34:31.476384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.448 [2024-07-14 04:34:31.486922] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.448 [2024-07-14 04:34:31.486956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.448 [2024-07-14 04:34:31.497251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.448 [2024-07-14 04:34:31.497278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.448 [2024-07-14 04:34:31.507353] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.448 [2024-07-14 04:34:31.507379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.448 [2024-07-14 04:34:31.518520] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.448 [2024-07-14 04:34:31.518546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.448 [2024-07-14 04:34:31.529117] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.448 [2024-07-14 04:34:31.529144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.448 [2024-07-14 04:34:31.539465] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.448 [2024-07-14 04:34:31.539506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.448 [2024-07-14 04:34:31.549741] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.448 [2024-07-14 04:34:31.549768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.448 [2024-07-14 04:34:31.560036] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.448 [2024-07-14 04:34:31.560063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.448 [2024-07-14 04:34:31.570082] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.448 [2024-07-14 04:34:31.570110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.448 [2024-07-14 04:34:31.580394] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.448 [2024-07-14 04:34:31.580421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.448 [2024-07-14 04:34:31.591016] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.448 [2024-07-14 04:34:31.591044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.448 [2024-07-14 04:34:31.604034] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.448 [2024-07-14 04:34:31.604061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.448 [2024-07-14 04:34:31.613693] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.448 [2024-07-14 04:34:31.613720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.448 [2024-07-14 04:34:31.624714] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.448 [2024-07-14 04:34:31.624741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.448 [2024-07-14 04:34:31.636731] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.448 [2024-07-14 04:34:31.636757] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.645904] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.645932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.657271] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.657298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.669463] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.669489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.678948] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.678975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.690128] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.690174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.700606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.700633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.710501] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.710528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.721190] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.721216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.730716] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.730741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.741465] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.741491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.754081] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.754107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.763892] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.763919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.775572] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.775599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.786223] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.786249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.796476] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.796503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.806776] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.806803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.817215] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.817241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.827624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.827649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.837964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.837990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.848608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.848634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.859389] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.859415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.871446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.871471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.881132] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.881172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.706 [2024-07-14 04:34:31.892533] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.706 [2024-07-14 04:34:31.892565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.966 [2024-07-14 04:34:31.904922] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.966 [2024-07-14 04:34:31.904949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.966 [2024-07-14 04:34:31.914529] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.966 [2024-07-14 04:34:31.914555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.966 [2024-07-14 04:34:31.925694] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.966 [2024-07-14 04:34:31.925719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.966 [2024-07-14 04:34:31.936171] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.966 [2024-07-14 04:34:31.936197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.966 [2024-07-14 04:34:31.946509] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.966 [2024-07-14 04:34:31.946535] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.966 [2024-07-14 04:34:31.957386] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.967 [2024-07-14 04:34:31.957412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.967 [2024-07-14 04:34:31.968208] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.967 [2024-07-14 04:34:31.968234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.967 [2024-07-14 04:34:31.978682] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.967 [2024-07-14 04:34:31.978709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.967 [2024-07-14 04:34:31.989448] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.967 [2024-07-14 04:34:31.989474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.967 [2024-07-14 04:34:32.001900] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.967 [2024-07-14 04:34:32.001927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.967 [2024-07-14 04:34:32.011390] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.967 [2024-07-14 04:34:32.011416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.967 [2024-07-14 04:34:32.022642] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.967 [2024-07-14 04:34:32.022684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.967 [2024-07-14 04:34:32.033155] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.967 [2024-07-14 04:34:32.033182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.967 [2024-07-14 04:34:32.044016] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.967 [2024-07-14 04:34:32.044044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.967 [2024-07-14 04:34:32.056238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.967 [2024-07-14 04:34:32.056265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.967 [2024-07-14 04:34:32.065894] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.967 [2024-07-14 04:34:32.065921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.967 [2024-07-14 04:34:32.076331] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.967 [2024-07-14 04:34:32.076357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.967 [2024-07-14 04:34:32.086478] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.967 [2024-07-14 04:34:32.086504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.967 [2024-07-14 04:34:32.096784] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.967 [2024-07-14 04:34:32.096810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.967 [2024-07-14 04:34:32.107181] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.967 [2024-07-14 04:34:32.107208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.967 [2024-07-14 04:34:32.117929] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.967 [2024-07-14 04:34:32.117956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.967 [2024-07-14 04:34:32.129134] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.967 [2024-07-14 04:34:32.129175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.967 [2024-07-14 04:34:32.140082] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.967 [2024-07-14 04:34:32.140109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.967 [2024-07-14 04:34:32.150935] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.967 [2024-07-14 04:34:32.150962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.163590] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.163618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.172741] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.172769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.184105] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.184132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.194613] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.194640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.206123] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.206150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.216528] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.216555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.227120] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.227162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.237787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.237813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.248537] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.248563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.259021] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.259048] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.269420] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.269446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.282037] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.282064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.291479] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.291506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.302568] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.302595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.312914] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.312942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.323372] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.323399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.334458] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.334484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.345095] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.345122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.357224] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.357251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.367435] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.367462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.378327] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.378355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.388737] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.388764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.399587] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.399614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.228 [2024-07-14 04:34:32.412551] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.228 [2024-07-14 04:34:32.412577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.422528] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.422555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.433774] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.433800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.444304] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.444332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.455029] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.455056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.465969] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.465996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.476615] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.476641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.488318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.488345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.497043] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.497070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.508711] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.508737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.519160] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.519185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.529609] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.529634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.540661] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.540688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.551709] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.551735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.562001] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.562028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.572739] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.572781] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.583490] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.583515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.594192] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.594218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.604943] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.604970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.615514] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.615540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.625964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.625991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.636842] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.636878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.646743] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.646768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.658016] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.658042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.488 [2024-07-14 04:34:32.668903] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.488 [2024-07-14 04:34:32.668937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.679425] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.679453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.692472] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.692500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.701893] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.701931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.712908] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.712935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.723188] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.723215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.734013] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.734040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.744565] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.744592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.755208] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.755234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.767258] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.767285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.776486] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.776512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.787770] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.787799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.798652] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.798679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.809730] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.809757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.820744] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.820772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.833236] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.833263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.842712] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.842740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.853637] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.853664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.864478] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.864505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.875326] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.875367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.886457] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.886483] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.897672] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.897699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.907948] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.907983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.918987] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.919014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.749 [2024-07-14 04:34:32.929874] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.749 [2024-07-14 04:34:32.929916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.008 [2024-07-14 04:34:32.942582] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.008 [2024-07-14 04:34:32.942610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.008 [2024-07-14 04:34:32.952508] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.008 [2024-07-14 04:34:32.952534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.008 [2024-07-14 04:34:32.963616] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.008 [2024-07-14 04:34:32.963643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.008 [2024-07-14 04:34:32.973756] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.008 [2024-07-14 04:34:32.973781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.008 [2024-07-14 04:34:32.984630] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.008 [2024-07-14 04:34:32.984655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.008 [2024-07-14 04:34:32.994990] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.008 [2024-07-14 04:34:32.995017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.008 [2024-07-14 04:34:33.005856] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.009 [2024-07-14 04:34:33.005890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.009 [2024-07-14 04:34:33.018308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.009 [2024-07-14 04:34:33.018335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.009 [2024-07-14 04:34:33.027992] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.009 [2024-07-14 04:34:33.028019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.009 [2024-07-14 04:34:33.039311] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.009 [2024-07-14 04:34:33.039337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.009 [2024-07-14 04:34:33.049726] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.009 [2024-07-14 04:34:33.049752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.009 [2024-07-14 04:34:33.061094] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.009 [2024-07-14 04:34:33.061121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.009 [2024-07-14 04:34:33.071978] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.009 [2024-07-14 04:34:33.072006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.009 [2024-07-14 04:34:33.082664] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.009 [2024-07-14 04:34:33.082690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.009 [2024-07-14 04:34:33.095097] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.009 [2024-07-14 04:34:33.095123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.009 [2024-07-14 04:34:33.104598] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.009 [2024-07-14 04:34:33.104624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.009 [2024-07-14 04:34:33.116264] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.009 [2024-07-14 04:34:33.116297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.009 [2024-07-14 04:34:33.126802] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.009 [2024-07-14 04:34:33.126829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.009 [2024-07-14 04:34:33.137007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.009 [2024-07-14 04:34:33.137034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.009 [2024-07-14 04:34:33.147769] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.009 [2024-07-14 04:34:33.147796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.009 [2024-07-14 04:34:33.159605] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.009 [2024-07-14 04:34:33.159630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.009 [2024-07-14 04:34:33.168639] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.009 [2024-07-14 04:34:33.168665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.009 [2024-07-14 04:34:33.181591] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.009 [2024-07-14 04:34:33.181617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.009 [2024-07-14 04:34:33.191211] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.009 [2024-07-14 04:34:33.191237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.267 [2024-07-14 04:34:33.202163] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.267 [2024-07-14 04:34:33.202190] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.267 [2024-07-14 04:34:33.212686] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.267 [2024-07-14 04:34:33.212712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.267 [2024-07-14 04:34:33.223264] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.267 [2024-07-14 04:34:33.223291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.267 [2024-07-14 04:34:33.233546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.267 [2024-07-14 04:34:33.233571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.268 [2024-07-14 04:34:33.244214] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.268 [2024-07-14 04:34:33.244240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.268 [2024-07-14 04:34:33.254512] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.268 [2024-07-14 04:34:33.254538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.268 [2024-07-14 04:34:33.264823] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.268 [2024-07-14 04:34:33.264864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.268 [2024-07-14 04:34:33.275307] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.268 [2024-07-14 04:34:33.275333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.268 [2024-07-14 04:34:33.285943] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.268 [2024-07-14 04:34:33.285969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.268 [2024-07-14 04:34:33.296547] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.268 [2024-07-14 04:34:33.296573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.268 [2024-07-14 04:34:33.307576] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.268 [2024-07-14 04:34:33.307603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.268 [2024-07-14 04:34:33.318008] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.268 [2024-07-14 04:34:33.318045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.268 [2024-07-14 04:34:33.328573] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.268 [2024-07-14 04:34:33.328613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.268 [2024-07-14 04:34:33.340902] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.268 [2024-07-14 04:34:33.340928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.268 [2024-07-14 04:34:33.350106] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.268 [2024-07-14 04:34:33.350133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.268 [2024-07-14 04:34:33.361419] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.268 [2024-07-14 04:34:33.361445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.268 [2024-07-14 04:34:33.373838] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.268 [2024-07-14 04:34:33.373889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.268 [2024-07-14 04:34:33.383771] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.268 [2024-07-14 04:34:33.383798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.268 [2024-07-14 04:34:33.394824] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.268 [2024-07-14 04:34:33.394851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.268 [2024-07-14 04:34:33.404995] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.268 [2024-07-14 04:34:33.405022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.268 [2024-07-14 04:34:33.415833] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.268 [2024-07-14 04:34:33.415882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.268 [2024-07-14 04:34:33.427773] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.268 [2024-07-14 04:34:33.427800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.268 [2024-07-14 04:34:33.437243] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.268 [2024-07-14 04:34:33.437269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.268 [2024-07-14 04:34:33.448762] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.268 [2024-07-14 04:34:33.448790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.459688] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.459716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.470633] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.470659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.483119] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.483173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.492899] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.492940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.503582] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.503607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.514098] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.514125] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.525251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.525286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.535790] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.535819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.548480] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.548506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.558203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.558229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.569425] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.569451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.580052] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.580079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.590878] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.590906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.601818] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.601846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.612594] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.612620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.623130] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.623171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.633683] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.633709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.644495] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.644521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.656717] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.656743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.666311] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.666338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.677591] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.677617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.688188] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.688214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.699102] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.699128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.526 [2024-07-14 04:34:33.709766] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.526 [2024-07-14 04:34:33.709793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.720221] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.720263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.730774] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.730801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.741477] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.741502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.751997] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.752023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.762443] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.762470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.772702] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.772729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.782998] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.783024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.793807] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.793833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.806992] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.807020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.818348] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.818375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.827709] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.827736] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.839602] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.839630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.850503] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.850531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.861676] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.861703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.872465] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.872491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.882935] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.882962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.893764] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.893789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.904468] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.904495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.915051] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.915078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.924951] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.924977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.935756] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.935782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.948506] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.948532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.958252] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.958279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.785 [2024-07-14 04:34:33.969157] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.785 [2024-07-14 04:34:33.969183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:33.979402] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:33.979429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:33.989455] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:33.989481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.000442] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.000468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.011342] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.011369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.023905] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.023931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.033286] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.033313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.043916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.043942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.054520] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.054546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.065077] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.065104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.075417] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.075444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.085763] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.085804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.095685] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.095712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.106952] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.106979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.117720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.117746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.128529] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.128557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.140780] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.140807] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.150596] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.150623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.161684] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.161710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.172593] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.172619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.183214] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.183241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.195933] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.195960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.205034] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.205060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.216161] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.216187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.045 [2024-07-14 04:34:34.226448] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.045 [2024-07-14 04:34:34.226475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.236739] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.236767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.246972] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.246998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.257272] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.257299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.267991] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.268017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.278730] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.278756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.289073] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.289100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.299649] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.299690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.310275] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.310317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.323025] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.323052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.332261] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.332288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.343669] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.343696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.354093] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.354121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.364606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.364648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.377986] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.378013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.387739] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.387765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.398626] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.398652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.409260] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.409286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.419748] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.419774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.432398] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.432424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.441567] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.441593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.452973] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.452999] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.465490] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.465517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.475718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.475745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.305 [2024-07-14 04:34:34.486228] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.305 [2024-07-14 04:34:34.486255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.564 [2024-07-14 04:34:34.496734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.564 [2024-07-14 04:34:34.496762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.564 [2024-07-14 04:34:34.508651] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.564 [2024-07-14 04:34:34.508677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.564 [2024-07-14 04:34:34.518509] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.564 [2024-07-14 04:34:34.518535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.564 [2024-07-14 04:34:34.530101] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.564 [2024-07-14 04:34:34.530128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.564 [2024-07-14 04:34:34.542969] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.564 [2024-07-14 04:34:34.543004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.564 [2024-07-14 04:34:34.552439] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.564 [2024-07-14 04:34:34.552466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.564 [2024-07-14 04:34:34.563649] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.564 [2024-07-14 04:34:34.563677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.564 [2024-07-14 04:34:34.574202] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.564 [2024-07-14 04:34:34.574245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.564 [2024-07-14 04:34:34.584954] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.564 [2024-07-14 04:34:34.584981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.564 [2024-07-14 04:34:34.595541] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.564 [2024-07-14 04:34:34.595583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.564 [2024-07-14 04:34:34.606309] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.564 [2024-07-14 04:34:34.606335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.564 [2024-07-14 04:34:34.618785] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.564 [2024-07-14 04:34:34.618811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.564 [2024-07-14 04:34:34.631044] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.564 [2024-07-14 04:34:34.631071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.564 [2024-07-14 04:34:34.640441] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.564 [2024-07-14 04:34:34.640467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.564 [2024-07-14 04:34:34.652324] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.564 [2024-07-14 04:34:34.652350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.564 [2024-07-14 04:34:34.664440] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.564 [2024-07-14 04:34:34.664466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.564 [2024-07-14 04:34:34.673787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.564 [2024-07-14 04:34:34.673813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.565 [2024-07-14 04:34:34.685106] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.565 [2024-07-14 04:34:34.685134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.565 [2024-07-14 04:34:34.697219] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.565 [2024-07-14 04:34:34.697246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.565 [2024-07-14 04:34:34.706597] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.565 [2024-07-14 04:34:34.706623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.565 [2024-07-14 04:34:34.717923] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.565 [2024-07-14 04:34:34.717950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.565 [2024-07-14 04:34:34.730486] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.565 [2024-07-14 04:34:34.730513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.565 [2024-07-14 04:34:34.740082] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.565 [2024-07-14 04:34:34.740109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.823 [2024-07-14 04:34:34.759841] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.823 [2024-07-14 04:34:34.759892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.823 [2024-07-14 04:34:34.770143] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.823 [2024-07-14 04:34:34.770170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.823 [2024-07-14 04:34:34.781688] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.823 [2024-07-14 04:34:34.781715] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.823 [2024-07-14 04:34:34.792629] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.823 [2024-07-14 04:34:34.792657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.823 [2024-07-14 04:34:34.803177] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.823 [2024-07-14 04:34:34.803203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.823 [2024-07-14 04:34:34.813728] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.823 [2024-07-14 04:34:34.813755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.823 [2024-07-14 04:34:34.823916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.823 [2024-07-14 04:34:34.823943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.823 [2024-07-14 04:34:34.834306] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.823 [2024-07-14 04:34:34.834347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.823 [2024-07-14 04:34:34.844626] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.823 [2024-07-14 04:34:34.844652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.823 [2024-07-14 04:34:34.855617] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.823 [2024-07-14 04:34:34.855644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.823 [2024-07-14 04:34:34.866500] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.823 [2024-07-14 04:34:34.866526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.823 [2024-07-14 04:34:34.876602] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.823 [2024-07-14 04:34:34.876629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.823 [2024-07-14 04:34:34.886637] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.823 [2024-07-14 04:34:34.886665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.823 [2024-07-14 04:34:34.896953] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.824 [2024-07-14 04:34:34.896980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.824 [2024-07-14 04:34:34.907335] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.824 [2024-07-14 04:34:34.907362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.824 [2024-07-14 04:34:34.918043] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.824 [2024-07-14 04:34:34.918070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.824 [2024-07-14 04:34:34.928575] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.824 [2024-07-14 04:34:34.928602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.824 [2024-07-14 04:34:34.939395] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.824 [2024-07-14 04:34:34.939422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.824 [2024-07-14 04:34:34.949550] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.824 [2024-07-14 04:34:34.949577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.824 [2024-07-14 04:34:34.960334] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.824 [2024-07-14 04:34:34.960369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.824 [2024-07-14 04:34:34.971467] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.824 [2024-07-14 04:34:34.971495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.824 [2024-07-14 04:34:34.982077] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.824 [2024-07-14 04:34:34.982105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.824 [2024-07-14 04:34:34.992702] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.824 [2024-07-14 04:34:34.992729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.824 [2024-07-14 04:34:35.003487] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.824 [2024-07-14 04:34:35.003514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.824 [2024-07-14 04:34:35.014769] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.824 [2024-07-14 04:34:35.014796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.084 [2024-07-14 04:34:35.025785] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.084 [2024-07-14 04:34:35.025813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.084 [2024-07-14 04:34:35.036826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.084 [2024-07-14 04:34:35.036856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.084 [2024-07-14 04:34:35.047997] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.084 [2024-07-14 04:34:35.048025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.084 [2024-07-14 04:34:35.059088] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.084 [2024-07-14 04:34:35.059115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.084 [2024-07-14 04:34:35.069435] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.084 [2024-07-14 04:34:35.069462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.084 [2024-07-14 04:34:35.080031] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.084 [2024-07-14 04:34:35.080058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.084 [2024-07-14 04:34:35.090265] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.084 [2024-07-14 04:34:35.090292] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.084 [2024-07-14 04:34:35.100524] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.084 [2024-07-14 04:34:35.100550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.084 [2024-07-14 04:34:35.111086] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.084 [2024-07-14 04:34:35.111113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.084 [2024-07-14 04:34:35.121457] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.084 [2024-07-14 04:34:35.121483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.084 [2024-07-14 04:34:35.132620] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.084 [2024-07-14 04:34:35.132647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.084 [2024-07-14 04:34:35.143450] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.084 [2024-07-14 04:34:35.143477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.084 [2024-07-14 04:34:35.154122] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.084 [2024-07-14 04:34:35.154149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.084 [2024-07-14 04:34:35.164586] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.084 [2024-07-14 04:34:35.164621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.084 [2024-07-14 04:34:35.177428] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.084 [2024-07-14 04:34:35.177455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.084 [2024-07-14 04:34:35.186809] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.084 [2024-07-14 04:34:35.186836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.085 [2024-07-14 04:34:35.198005] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.085 [2024-07-14 04:34:35.198033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.085 [2024-07-14 04:34:35.208576] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.085 [2024-07-14 04:34:35.208603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.085 [2024-07-14 04:34:35.219621] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.085 [2024-07-14 04:34:35.219648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.085 [2024-07-14 04:34:35.230434] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.085 [2024-07-14 04:34:35.230461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.085 [2024-07-14 04:34:35.241051] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.085 [2024-07-14 04:34:35.241077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.085 [2024-07-14 04:34:35.253302] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.085 [2024-07-14 04:34:35.253329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.085 [2024-07-14 04:34:35.263263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.085 [2024-07-14 04:34:35.263290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.085 [2024-07-14 04:34:35.274074] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.085 [2024-07-14 04:34:35.274100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.346 [2024-07-14 04:34:35.286255] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.346 [2024-07-14 04:34:35.286282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.346 [2024-07-14 04:34:35.295638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.346 [2024-07-14 04:34:35.295664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.346 [2024-07-14 04:34:35.307080] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.346 [2024-07-14 04:34:35.307107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.346 [2024-07-14 04:34:35.317891] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.346 [2024-07-14 04:34:35.317923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.346 [2024-07-14 04:34:35.328475] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.346 [2024-07-14 04:34:35.328502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.346 [2024-07-14 04:34:35.338899] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.346 [2024-07-14 04:34:35.338926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.346 [2024-07-14 04:34:35.349448] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.346 [2024-07-14 04:34:35.349476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.346 [2024-07-14 04:34:35.360057] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.346 [2024-07-14 04:34:35.360084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.346 [2024-07-14 04:34:35.370470] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.346 [2024-07-14 04:34:35.370497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.346 [2024-07-14 04:34:35.380189] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.346 [2024-07-14 04:34:35.380215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.346 [2024-07-14 04:34:35.391319] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.346 [2024-07-14 04:34:35.391347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.346 [2024-07-14 04:34:35.401735] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.346 [2024-07-14 04:34:35.401761] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.346 [2024-07-14 04:34:35.412099] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.346 [2024-07-14 04:34:35.412126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.346 [2024-07-14 04:34:35.418056] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.346 [2024-07-14 04:34:35.418081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.346 00:18:15.346 Latency(us) 00:18:15.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.346 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:15.346 Nvme1n1 : 5.01 11903.88 93.00 0.00 0.00 10739.77 4563.25 25826.04 00:18:15.346 =================================================================================================================== 00:18:15.346 Total : 11903.88 93.00 0.00 0.00 10739.77 4563.25 25826.04 00:18:15.346 [2024-07-14 04:34:35.426073] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.346 [2024-07-14 04:34:35.426096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.346 [2024-07-14 04:34:35.434095] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.346 [2024-07-14 04:34:35.434120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.347 [2024-07-14 04:34:35.442189] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.347 [2024-07-14 04:34:35.442242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.347 [2024-07-14 04:34:35.450214] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.347 [2024-07-14 04:34:35.450264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.347 [2024-07-14 04:34:35.458237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.347 [2024-07-14 04:34:35.458291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.347 [2024-07-14 04:34:35.466251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.347 [2024-07-14 04:34:35.466305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.347 [2024-07-14 04:34:35.474266] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.347 [2024-07-14 04:34:35.474317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.347 [2024-07-14 04:34:35.482307] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.347 [2024-07-14 04:34:35.482361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.347 [2024-07-14 04:34:35.490314] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.347 [2024-07-14 04:34:35.490359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.347 [2024-07-14 04:34:35.498338] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.347 [2024-07-14 04:34:35.498391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.347 [2024-07-14 04:34:35.506353] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.347 [2024-07-14 04:34:35.506403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.347 [2024-07-14 04:34:35.514394] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.347 [2024-07-14 04:34:35.514450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.347 [2024-07-14 04:34:35.522414] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.347 [2024-07-14 04:34:35.522469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.347 [2024-07-14 04:34:35.530421] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.347 [2024-07-14 04:34:35.530471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.606 [2024-07-14 04:34:35.538442] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.606 [2024-07-14 04:34:35.538493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.607 [2024-07-14 04:34:35.546463] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.607 [2024-07-14 04:34:35.546514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.607 [2024-07-14 04:34:35.554478] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.607 [2024-07-14 04:34:35.554530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.607 [2024-07-14 04:34:35.562490] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.607 [2024-07-14 04:34:35.562532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.607 [2024-07-14 04:34:35.570479] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.607 [2024-07-14 04:34:35.570505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.607 [2024-07-14 04:34:35.578545] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.607 [2024-07-14 04:34:35.578593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.607 [2024-07-14 04:34:35.586563] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.607 [2024-07-14 04:34:35.586614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.607 [2024-07-14 04:34:35.594597] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.607 [2024-07-14 04:34:35.594651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.607 [2024-07-14 04:34:35.602575] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.607 [2024-07-14 04:34:35.602603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.607 [2024-07-14 04:34:35.610592] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.607 [2024-07-14 04:34:35.610621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.607 [2024-07-14 04:34:35.618659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.607 [2024-07-14 04:34:35.618711] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.607 [2024-07-14 04:34:35.626676] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.607 [2024-07-14 04:34:35.626725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.607 [2024-07-14 04:34:35.634654] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.607 [2024-07-14 04:34:35.634679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.607 [2024-07-14 04:34:35.642673] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.607 [2024-07-14 04:34:35.642696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.607 [2024-07-14 04:34:35.650696] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.607 [2024-07-14 04:34:35.650720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2786695) - No such process 00:18:15.607 04:34:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2786695 00:18:15.607 04:34:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:15.607 04:34:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.607 04:34:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:15.607 04:34:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.607 04:34:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:15.607 04:34:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.607 04:34:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:15.607 delay0 00:18:15.607 04:34:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.607 04:34:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:15.607 04:34:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.607 04:34:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:15.607 04:34:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.607 04:34:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:15.607 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.867 [2024-07-14 04:34:35.812061] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:22.446 Initializing NVMe Controllers 00:18:22.446 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:22.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:22.446 Initialization complete. Launching workers. 
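(For reference: the zcopy teardown traced above swaps the malloc-backed namespace for a delay bdev and then drives cancellable I/O at it. A minimal standalone sketch of that sequence is below; the commands and arguments are taken from the trace itself, and the only assumption is invoking scripts/rpc.py against the default RPC socket instead of going through the harness's rpc_cmd wrapper.)

# detach NSID 1 (the malloc-backed namespace) from cnode1
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
# wrap malloc0 in a delay bdev; -r/-t/-w/-n set average and 99th-percentile read/write latency in microseconds (1 s here)
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# re-expose the delayed bdev as NSID 1 so queued I/O stays in flight long enough to be aborted
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# submit random 50/50 read/write I/O over TCP for 5 seconds at queue depth 64 and abort it (same arguments as the run above)
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

(The per-controller abort summary that follows reports how many of those in-flight commands the target actually aborted.)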
00:18:22.446 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 118 00:18:22.446 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 402, failed to submit 36 00:18:22.446 success 245, unsuccess 157, failed 0 00:18:22.446 04:34:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:22.446 04:34:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:22.446 04:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:22.446 04:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:22.446 04:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:22.446 04:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:22.446 04:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:22.446 04:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:22.446 rmmod nvme_tcp 00:18:22.446 rmmod nvme_fabrics 00:18:22.446 rmmod nvme_keyring 00:18:22.446 04:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:22.446 04:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:22.446 04:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:22.446 04:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2785362 ']' 00:18:22.446 04:34:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2785362 00:18:22.446 04:34:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 2785362 ']' 00:18:22.446 04:34:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 2785362 00:18:22.446 04:34:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:18:22.446 04:34:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:22.446 04:34:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2785362 00:18:22.446 04:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:22.446 04:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:22.446 04:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2785362' 00:18:22.446 killing process with pid 2785362 00:18:22.446 04:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 2785362 00:18:22.446 04:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 2785362 00:18:22.446 04:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:22.446 04:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:22.446 04:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:22.446 04:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:22.446 04:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:22.446 04:34:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.446 04:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.446 04:34:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.354 04:34:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:24.354 00:18:24.354 real 0m27.635s 00:18:24.354 user 0m40.627s 00:18:24.354 sys 0m8.431s 00:18:24.354 04:34:44 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:18:24.354 04:34:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:24.354 ************************************ 00:18:24.354 END TEST nvmf_zcopy 00:18:24.354 ************************************ 00:18:24.354 04:34:44 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:24.354 04:34:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:24.354 04:34:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:24.354 04:34:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:24.354 ************************************ 00:18:24.354 START TEST nvmf_nmic 00:18:24.354 ************************************ 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:24.354 * Looking for test storage... 00:18:24.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.354 04:34:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.355 04:34:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.355 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:24.355 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:24.355 04:34:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:24.355 04:34:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:26.260 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:26.261 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:26.261 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:26.261 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:26.261 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:26.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:26.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:18:26.261 00:18:26.261 --- 10.0.0.2 ping statistics --- 00:18:26.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.261 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:26.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:26.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:18:26.261 00:18:26.261 --- 10.0.0.1 ping statistics --- 00:18:26.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.261 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2789958 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2789958 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 2789958 ']' 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:26.261 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:26.520 [2024-07-14 04:34:46.469541] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:26.520 [2024-07-14 04:34:46.469625] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.520 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.520 [2024-07-14 04:34:46.537984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:26.520 [2024-07-14 04:34:46.628245] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.520 [2024-07-14 04:34:46.628308] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:26.520 [2024-07-14 04:34:46.628336] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.520 [2024-07-14 04:34:46.628347] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.520 [2024-07-14 04:34:46.628356] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:26.520 [2024-07-14 04:34:46.628449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.520 [2024-07-14 04:34:46.628515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.520 [2024-07-14 04:34:46.628567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:26.520 [2024-07-14 04:34:46.628569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:26.777 [2024-07-14 04:34:46.779424] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:26.777 Malloc0 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:26.777 [2024-07-14 04:34:46.830562] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:26.777 test case1: single bdev can't be used in multiple subsystems 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:26.777 [2024-07-14 04:34:46.854473] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:26.777 [2024-07-14 04:34:46.854501] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:26.777 [2024-07-14 04:34:46.854531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.777 request: 00:18:26.777 { 00:18:26.777 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:26.777 "namespace": { 00:18:26.777 "bdev_name": "Malloc0", 00:18:26.777 "no_auto_visible": false 00:18:26.777 }, 00:18:26.777 "method": "nvmf_subsystem_add_ns", 00:18:26.777 "req_id": 1 00:18:26.777 } 00:18:26.777 Got JSON-RPC error response 00:18:26.777 response: 00:18:26.777 { 00:18:26.777 "code": -32602, 00:18:26.777 "message": "Invalid parameters" 00:18:26.777 } 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:26.777 Adding namespace failed - expected result. 
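[editor's note] For reference, the subsystem/namespace sequence that the rpc_cmd trace above drives can be condensed into the short sketch below. It assumes a running nvmf_tgt (started the way the log shows, via ip netns exec ... nvmf_tgt) and SPDK's scripts/rpc.py; the bare ./scripts/rpc.py path and the rpc variable are illustrative stand-ins for the full workspace path and the rpc_cmd wrapper used by the autotest. It mirrors only the calls already traced above and shows why test case1 is expected to fail: Malloc0 is already claimed exclusive_write by cnode1, so adding it to cnode2 returns the JSON-RPC error recorded in the log.

  rpc=./scripts/rpc.py                                   # assumed path to SPDK's RPC helper

  $rpc nvmf_create_transport -t tcp -o -u 8192           # TCP transport init
  $rpc bdev_malloc_create 64 512 -b Malloc0              # 64 MiB malloc bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # test case1: a second subsystem cannot claim the same bdev
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'Adding namespace failed - expected result.'

Test case2 below then simply adds a second listener on port 4421 to cnode1 and issues two nvme connect calls against the same NQN, giving the host two paths to the one subsystem. [end editor's note]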
00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:26.777 test case2: host connect to nvmf target in multiple paths 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:26.777 [2024-07-14 04:34:46.862578] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.777 04:34:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:27.371 04:34:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:27.941 04:34:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:27.941 04:34:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:18:27.941 04:34:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:27.941 04:34:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:27.941 04:34:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:18:30.473 04:34:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:30.473 04:34:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:30.473 04:34:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:30.473 04:34:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:30.473 04:34:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:30.473 04:34:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:18:30.473 04:34:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:30.473 [global] 00:18:30.473 thread=1 00:18:30.473 invalidate=1 00:18:30.473 rw=write 00:18:30.473 time_based=1 00:18:30.473 runtime=1 00:18:30.473 ioengine=libaio 00:18:30.473 direct=1 00:18:30.473 bs=4096 00:18:30.473 iodepth=1 00:18:30.473 norandommap=0 00:18:30.473 numjobs=1 00:18:30.473 00:18:30.473 verify_dump=1 00:18:30.473 verify_backlog=512 00:18:30.473 verify_state_save=0 00:18:30.473 do_verify=1 00:18:30.473 verify=crc32c-intel 00:18:30.473 [job0] 00:18:30.473 filename=/dev/nvme0n1 00:18:30.473 Could not set queue depth (nvme0n1) 00:18:30.473 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:30.473 fio-3.35 00:18:30.473 Starting 1 thread 00:18:31.411 00:18:31.411 job0: (groupid=0, jobs=1): err= 0: pid=2790576: Sun Jul 14 04:34:51 2024 00:18:31.411 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:31.411 slat (nsec): min=8178, max=71796, avg=20117.90, stdev=9073.48 
00:18:31.411 clat (usec): min=324, max=720, avg=452.86, stdev=74.22 00:18:31.411 lat (usec): min=340, max=757, avg=472.98, stdev=78.28 00:18:31.411 clat percentiles (usec): 00:18:31.411 | 1.00th=[ 343], 5.00th=[ 355], 10.00th=[ 363], 20.00th=[ 388], 00:18:31.411 | 30.00th=[ 416], 40.00th=[ 429], 50.00th=[ 441], 60.00th=[ 453], 00:18:31.411 | 70.00th=[ 469], 80.00th=[ 506], 90.00th=[ 570], 95.00th=[ 603], 00:18:31.411 | 99.00th=[ 652], 99.50th=[ 668], 99.90th=[ 701], 99.95th=[ 717], 00:18:31.411 | 99.99th=[ 717] 00:18:31.411 write: IOPS=1138, BW=4555KiB/s (4665kB/s)(4560KiB/1001msec); 0 zone resets 00:18:31.411 slat (usec): min=9, max=41573, avg=88.54, stdev=1496.42 00:18:31.411 clat (usec): min=208, max=655, avg=351.40, stdev=70.61 00:18:31.411 lat (usec): min=224, max=42025, avg=439.94, stdev=1502.53 00:18:31.411 clat percentiles (usec): 00:18:31.411 | 1.00th=[ 227], 5.00th=[ 247], 10.00th=[ 258], 20.00th=[ 281], 00:18:31.411 | 30.00th=[ 302], 40.00th=[ 322], 50.00th=[ 351], 60.00th=[ 375], 00:18:31.411 | 70.00th=[ 400], 80.00th=[ 420], 90.00th=[ 441], 95.00th=[ 457], 00:18:31.411 | 99.00th=[ 494], 99.50th=[ 510], 99.90th=[ 627], 99.95th=[ 660], 00:18:31.411 | 99.99th=[ 660] 00:18:31.411 bw ( KiB/s): min= 4096, max= 4096, per=89.91%, avg=4096.00, stdev= 0.00, samples=1 00:18:31.411 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:31.411 lat (usec) : 250=3.28%, 500=86.09%, 750=10.63% 00:18:31.411 cpu : usr=2.50%, sys=8.00%, ctx=2168, majf=0, minf=2 00:18:31.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:31.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.411 issued rwts: total=1024,1140,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:31.411 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:31.411 00:18:31.411 Run status group 0 (all jobs): 00:18:31.411 READ: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:18:31.411 WRITE: bw=4555KiB/s (4665kB/s), 4555KiB/s-4555KiB/s (4665kB/s-4665kB/s), io=4560KiB (4669kB), run=1001-1001msec 00:18:31.411 00:18:31.411 Disk stats (read/write): 00:18:31.411 nvme0n1: ios=940/1024, merge=0/0, ticks=1342/319, in_queue=1661, util=99.70% 00:18:31.411 04:34:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:31.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:31.411 04:34:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:31.411 04:34:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@117 -- # sync 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:31.671 rmmod nvme_tcp 00:18:31.671 rmmod nvme_fabrics 00:18:31.671 rmmod nvme_keyring 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2789958 ']' 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2789958 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 2789958 ']' 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 2789958 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2789958 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2789958' 00:18:31.671 killing process with pid 2789958 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 2789958 00:18:31.671 04:34:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 2789958 00:18:31.930 04:34:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:31.930 04:34:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:31.930 04:34:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:31.930 04:34:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:31.930 04:34:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:31.930 04:34:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.930 04:34:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.930 04:34:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.833 04:34:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:33.833 00:18:33.834 real 0m9.686s 00:18:33.834 user 0m22.059s 00:18:33.834 sys 0m2.286s 00:18:33.834 04:34:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:33.834 04:34:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:33.834 ************************************ 00:18:33.834 END TEST nvmf_nmic 00:18:33.834 ************************************ 00:18:34.092 04:34:54 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:34.092 04:34:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:34.092 04:34:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:34.092 
04:34:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:34.092 ************************************ 00:18:34.092 START TEST nvmf_fio_target 00:18:34.092 ************************************ 00:18:34.092 04:34:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:34.092 * Looking for test storage... 00:18:34.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:34.092 04:34:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:34.092 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:34.093 04:34:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.996 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:35.996 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:35.996 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:35.996 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:35.996 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:35.996 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:35.996 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:35.996 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:35.996 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:35.996 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:35.996 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:35.996 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:35.996 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:35.997 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:35.997 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:35.997 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:35.997 04:34:56 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:35.997 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:35.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:35.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:18:35.997 00:18:35.997 --- 10.0.0.2 ping statistics --- 00:18:35.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.997 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:35.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:35.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:18:35.997 00:18:35.997 --- 10.0.0.1 ping statistics --- 00:18:35.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.997 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:35.997 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:36.258 04:34:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:36.258 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:36.258 04:34:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:36.258 04:34:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.258 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2792645 00:18:36.258 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:36.258 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2792645 00:18:36.258 04:34:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 2792645 ']' 00:18:36.258 04:34:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.258 04:34:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:36.258 04:34:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.258 04:34:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:36.258 04:34:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.258 [2024-07-14 04:34:56.242884] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:18:36.258 [2024-07-14 04:34:56.242972] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.258 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.258 [2024-07-14 04:34:56.311924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:36.258 [2024-07-14 04:34:56.407179] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.258 [2024-07-14 04:34:56.407245] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.258 [2024-07-14 04:34:56.407262] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.258 [2024-07-14 04:34:56.407275] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.258 [2024-07-14 04:34:56.407286] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:36.258 [2024-07-14 04:34:56.407367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.258 [2024-07-14 04:34:56.407682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.258 [2024-07-14 04:34:56.407733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:36.258 [2024-07-14 04:34:56.407736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.517 04:34:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:36.517 04:34:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:18:36.517 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:36.517 04:34:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:36.518 04:34:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.518 04:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.518 04:34:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:36.776 [2024-07-14 04:34:56.812662] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.776 04:34:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:37.034 04:34:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:37.034 04:34:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:37.292 04:34:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:37.292 04:34:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:37.551 04:34:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:37.551 04:34:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:37.809 04:34:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:37.809 04:34:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:38.067 04:34:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:38.325 04:34:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:38.325 04:34:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:38.583 04:34:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:38.583 04:34:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:38.842 04:34:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:38.842 04:34:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:39.100 04:34:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:39.358 04:34:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:39.358 04:34:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:39.615 04:34:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:39.615 04:34:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:39.872 04:34:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:40.130 [2024-07-14 04:35:00.126074] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.130 04:35:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:40.387 04:35:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:40.646 04:35:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:41.212 04:35:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:41.212 04:35:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:18:41.212 04:35:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:41.212 04:35:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:18:41.212 04:35:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:18:41.212 04:35:01 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1201 -- # sleep 2 00:18:43.739 04:35:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:43.739 04:35:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:43.739 04:35:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:43.739 04:35:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:18:43.739 04:35:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:43.739 04:35:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:18:43.739 04:35:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:43.739 [global] 00:18:43.739 thread=1 00:18:43.739 invalidate=1 00:18:43.739 rw=write 00:18:43.739 time_based=1 00:18:43.739 runtime=1 00:18:43.739 ioengine=libaio 00:18:43.739 direct=1 00:18:43.739 bs=4096 00:18:43.739 iodepth=1 00:18:43.739 norandommap=0 00:18:43.739 numjobs=1 00:18:43.739 00:18:43.739 verify_dump=1 00:18:43.739 verify_backlog=512 00:18:43.739 verify_state_save=0 00:18:43.739 do_verify=1 00:18:43.739 verify=crc32c-intel 00:18:43.739 [job0] 00:18:43.739 filename=/dev/nvme0n1 00:18:43.739 [job1] 00:18:43.739 filename=/dev/nvme0n2 00:18:43.739 [job2] 00:18:43.739 filename=/dev/nvme0n3 00:18:43.739 [job3] 00:18:43.739 filename=/dev/nvme0n4 00:18:43.739 Could not set queue depth (nvme0n1) 00:18:43.739 Could not set queue depth (nvme0n2) 00:18:43.739 Could not set queue depth (nvme0n3) 00:18:43.739 Could not set queue depth (nvme0n4) 00:18:43.739 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:43.739 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:43.739 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:43.739 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:43.739 fio-3.35 00:18:43.739 Starting 4 threads 00:18:44.675 00:18:44.675 job0: (groupid=0, jobs=1): err= 0: pid=2793725: Sun Jul 14 04:35:04 2024 00:18:44.675 read: IOPS=1245, BW=4983KiB/s (5103kB/s)(4988KiB/1001msec) 00:18:44.675 slat (nsec): min=5660, max=52936, avg=15533.72, stdev=6898.78 00:18:44.675 clat (usec): min=323, max=785, avg=412.05, stdev=63.81 00:18:44.675 lat (usec): min=329, max=812, avg=427.59, stdev=66.75 00:18:44.675 clat percentiles (usec): 00:18:44.675 | 1.00th=[ 334], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 363], 00:18:44.675 | 30.00th=[ 375], 40.00th=[ 383], 50.00th=[ 392], 60.00th=[ 400], 00:18:44.675 | 70.00th=[ 420], 80.00th=[ 478], 90.00th=[ 502], 95.00th=[ 553], 00:18:44.675 | 99.00th=[ 586], 99.50th=[ 603], 99.90th=[ 627], 99.95th=[ 783], 00:18:44.675 | 99.99th=[ 783] 00:18:44.675 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:18:44.675 slat (nsec): min=7611, max=74458, avg=19094.67, stdev=8416.47 00:18:44.675 clat (usec): min=205, max=1125, avg=275.70, stdev=51.27 00:18:44.675 lat (usec): min=213, max=1138, avg=294.80, stdev=53.44 00:18:44.675 clat percentiles (usec): 00:18:44.675 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 237], 00:18:44.675 | 30.00th=[ 251], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:18:44.675 | 70.00th=[ 289], 
80.00th=[ 297], 90.00th=[ 322], 95.00th=[ 355], 00:18:44.675 | 99.00th=[ 416], 99.50th=[ 433], 99.90th=[ 750], 99.95th=[ 1123], 00:18:44.675 | 99.99th=[ 1123] 00:18:44.675 bw ( KiB/s): min= 7432, max= 7432, per=46.50%, avg=7432.00, stdev= 0.00, samples=1 00:18:44.675 iops : min= 1858, max= 1858, avg=1858.00, stdev= 0.00, samples=1 00:18:44.675 lat (usec) : 250=15.95%, 500=78.58%, 750=5.35%, 1000=0.07% 00:18:44.675 lat (msec) : 2=0.04% 00:18:44.675 cpu : usr=4.00%, sys=6.40%, ctx=2784, majf=0, minf=1 00:18:44.675 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:44.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.675 issued rwts: total=1247,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.675 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:44.675 job1: (groupid=0, jobs=1): err= 0: pid=2793726: Sun Jul 14 04:35:04 2024 00:18:44.675 read: IOPS=20, BW=82.0KiB/s (83.9kB/s)(84.0KiB/1025msec) 00:18:44.675 slat (nsec): min=12736, max=36412, avg=26404.52, stdev=9631.60 00:18:44.675 clat (usec): min=40895, max=41439, avg=40986.60, stdev=108.52 00:18:44.675 lat (usec): min=40929, max=41458, avg=41013.00, stdev=106.37 00:18:44.675 clat percentiles (usec): 00:18:44.675 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:44.675 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:44.675 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:44.675 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:18:44.675 | 99.99th=[41681] 00:18:44.675 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:18:44.675 slat (nsec): min=7619, max=73850, avg=21323.55, stdev=10867.78 00:18:44.675 clat (usec): min=221, max=459, avg=292.60, stdev=42.95 00:18:44.675 lat (usec): min=240, max=498, avg=313.93, stdev=41.29 00:18:44.675 clat percentiles (usec): 00:18:44.675 | 1.00th=[ 237], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 262], 00:18:44.675 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 289], 00:18:44.675 | 70.00th=[ 302], 80.00th=[ 318], 90.00th=[ 359], 95.00th=[ 388], 00:18:44.675 | 99.00th=[ 429], 99.50th=[ 441], 99.90th=[ 461], 99.95th=[ 461], 00:18:44.675 | 99.99th=[ 461] 00:18:44.675 bw ( KiB/s): min= 4096, max= 4096, per=25.63%, avg=4096.00, stdev= 0.00, samples=1 00:18:44.675 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:44.675 lat (usec) : 250=8.26%, 500=87.80% 00:18:44.675 lat (msec) : 50=3.94% 00:18:44.675 cpu : usr=0.59%, sys=1.56%, ctx=534, majf=0, minf=2 00:18:44.675 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:44.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.675 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.675 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:44.675 job2: (groupid=0, jobs=1): err= 0: pid=2793729: Sun Jul 14 04:35:04 2024 00:18:44.675 read: IOPS=489, BW=1958KiB/s (2005kB/s)(1960KiB/1001msec) 00:18:44.675 slat (nsec): min=6428, max=54871, avg=15410.36, stdev=9527.77 00:18:44.675 clat (usec): min=341, max=42027, avg=1712.71, stdev=6927.91 00:18:44.675 lat (usec): min=349, max=42060, avg=1728.12, stdev=6930.36 00:18:44.675 clat percentiles (usec): 00:18:44.675 | 1.00th=[ 347], 5.00th=[ 
359], 10.00th=[ 367], 20.00th=[ 429], 00:18:44.675 | 30.00th=[ 449], 40.00th=[ 474], 50.00th=[ 502], 60.00th=[ 537], 00:18:44.675 | 70.00th=[ 562], 80.00th=[ 578], 90.00th=[ 635], 95.00th=[ 725], 00:18:44.675 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:44.675 | 99.99th=[42206] 00:18:44.675 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:18:44.675 slat (nsec): min=7496, max=48290, avg=17990.98, stdev=6988.55 00:18:44.675 clat (usec): min=222, max=467, avg=273.30, stdev=24.71 00:18:44.675 lat (usec): min=235, max=488, avg=291.29, stdev=25.76 00:18:44.675 clat percentiles (usec): 00:18:44.675 | 1.00th=[ 229], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 258], 00:18:44.675 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:18:44.675 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 314], 00:18:44.675 | 99.00th=[ 379], 99.50th=[ 400], 99.90th=[ 469], 99.95th=[ 469], 00:18:44.675 | 99.99th=[ 469] 00:18:44.675 bw ( KiB/s): min= 4096, max= 4096, per=25.63%, avg=4096.00, stdev= 0.00, samples=1 00:18:44.675 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:44.675 lat (usec) : 250=6.19%, 500=68.96%, 750=22.55%, 1000=0.70% 00:18:44.675 lat (msec) : 4=0.10%, 20=0.10%, 50=1.40% 00:18:44.675 cpu : usr=1.10%, sys=1.60%, ctx=1004, majf=0, minf=1 00:18:44.675 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:44.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.675 issued rwts: total=490,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.675 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:44.675 job3: (groupid=0, jobs=1): err= 0: pid=2793730: Sun Jul 14 04:35:04 2024 00:18:44.675 read: IOPS=1141, BW=4567KiB/s (4677kB/s)(4572KiB/1001msec) 00:18:44.675 slat (nsec): min=4915, max=58568, avg=27800.65, stdev=10236.90 00:18:44.675 clat (usec): min=330, max=758, avg=447.39, stdev=48.17 00:18:44.675 lat (usec): min=348, max=772, avg=475.19, stdev=49.34 00:18:44.675 clat percentiles (usec): 00:18:44.675 | 1.00th=[ 347], 5.00th=[ 363], 10.00th=[ 396], 20.00th=[ 412], 00:18:44.675 | 30.00th=[ 420], 40.00th=[ 437], 50.00th=[ 453], 60.00th=[ 461], 00:18:44.675 | 70.00th=[ 469], 80.00th=[ 482], 90.00th=[ 498], 95.00th=[ 523], 00:18:44.675 | 99.00th=[ 586], 99.50th=[ 627], 99.90th=[ 709], 99.95th=[ 758], 00:18:44.675 | 99.99th=[ 758] 00:18:44.675 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:18:44.675 slat (nsec): min=6458, max=79872, avg=17669.37, stdev=8689.00 00:18:44.675 clat (usec): min=194, max=1420, avg=269.34, stdev=64.90 00:18:44.675 lat (usec): min=201, max=1435, avg=287.01, stdev=68.24 00:18:44.675 clat percentiles (usec): 00:18:44.675 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225], 00:18:44.676 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 273], 00:18:44.676 | 70.00th=[ 293], 80.00th=[ 310], 90.00th=[ 351], 95.00th=[ 375], 00:18:44.676 | 99.00th=[ 408], 99.50th=[ 449], 99.90th=[ 1106], 99.95th=[ 1418], 00:18:44.676 | 99.99th=[ 1418] 00:18:44.676 bw ( KiB/s): min= 6960, max= 6960, per=43.54%, avg=6960.00, stdev= 0.00, samples=1 00:18:44.676 iops : min= 1740, max= 1740, avg=1740.00, stdev= 0.00, samples=1 00:18:44.676 lat (usec) : 250=30.68%, 500=65.47%, 750=3.70%, 1000=0.07% 00:18:44.676 lat (msec) : 2=0.07% 00:18:44.676 cpu : usr=3.10%, sys=6.20%, ctx=2681, majf=0, minf=1 00:18:44.676 
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:44.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.676 issued rwts: total=1143,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.676 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:44.676 00:18:44.676 Run status group 0 (all jobs): 00:18:44.676 READ: bw=11.1MiB/s (11.6MB/s), 82.0KiB/s-4983KiB/s (83.9kB/s-5103kB/s), io=11.3MiB (11.9MB), run=1001-1025msec 00:18:44.676 WRITE: bw=15.6MiB/s (16.4MB/s), 1998KiB/s-6138KiB/s (2046kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1025msec 00:18:44.676 00:18:44.676 Disk stats (read/write): 00:18:44.676 nvme0n1: ios=1074/1270, merge=0/0, ticks=445/325, in_queue=770, util=86.47% 00:18:44.676 nvme0n2: ios=36/512, merge=0/0, ticks=682/144, in_queue=826, util=86.65% 00:18:44.676 nvme0n3: ios=207/512, merge=0/0, ticks=1679/139, in_queue=1818, util=97.58% 00:18:44.676 nvme0n4: ios=1047/1168, merge=0/0, ticks=1344/309, in_queue=1653, util=97.77% 00:18:44.676 04:35:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:44.676 [global] 00:18:44.676 thread=1 00:18:44.676 invalidate=1 00:18:44.676 rw=randwrite 00:18:44.676 time_based=1 00:18:44.676 runtime=1 00:18:44.676 ioengine=libaio 00:18:44.676 direct=1 00:18:44.676 bs=4096 00:18:44.676 iodepth=1 00:18:44.676 norandommap=0 00:18:44.676 numjobs=1 00:18:44.676 00:18:44.676 verify_dump=1 00:18:44.676 verify_backlog=512 00:18:44.676 verify_state_save=0 00:18:44.676 do_verify=1 00:18:44.676 verify=crc32c-intel 00:18:44.676 [job0] 00:18:44.676 filename=/dev/nvme0n1 00:18:44.676 [job1] 00:18:44.676 filename=/dev/nvme0n2 00:18:44.676 [job2] 00:18:44.676 filename=/dev/nvme0n3 00:18:44.676 [job3] 00:18:44.676 filename=/dev/nvme0n4 00:18:44.676 Could not set queue depth (nvme0n1) 00:18:44.676 Could not set queue depth (nvme0n2) 00:18:44.676 Could not set queue depth (nvme0n3) 00:18:44.676 Could not set queue depth (nvme0n4) 00:18:44.958 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:44.958 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:44.958 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:44.958 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:44.958 fio-3.35 00:18:44.958 Starting 4 threads 00:18:46.345 00:18:46.345 job0: (groupid=0, jobs=1): err= 0: pid=2793953: Sun Jul 14 04:35:06 2024 00:18:46.345 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:46.345 slat (nsec): min=6182, max=58856, avg=16559.79, stdev=7754.94 00:18:46.345 clat (usec): min=329, max=809, avg=474.31, stdev=77.58 00:18:46.345 lat (usec): min=336, max=829, avg=490.87, stdev=78.57 00:18:46.345 clat percentiles (usec): 00:18:46.345 | 1.00th=[ 343], 5.00th=[ 363], 10.00th=[ 379], 20.00th=[ 392], 00:18:46.345 | 30.00th=[ 416], 40.00th=[ 457], 50.00th=[ 482], 60.00th=[ 494], 00:18:46.345 | 70.00th=[ 515], 80.00th=[ 537], 90.00th=[ 578], 95.00th=[ 611], 00:18:46.345 | 99.00th=[ 652], 99.50th=[ 685], 99.90th=[ 758], 99.95th=[ 807], 00:18:46.345 | 99.99th=[ 807] 00:18:46.345 write: IOPS=1484, BW=5938KiB/s (6081kB/s)(5944KiB/1001msec); 0 zone resets 
00:18:46.345 slat (nsec): min=7784, max=70625, avg=21827.72, stdev=10884.08 00:18:46.345 clat (usec): min=200, max=852, avg=302.57, stdev=68.26 00:18:46.345 lat (usec): min=211, max=877, avg=324.40, stdev=74.77 00:18:46.345 clat percentiles (usec): 00:18:46.345 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 239], 00:18:46.345 | 30.00th=[ 255], 40.00th=[ 273], 50.00th=[ 289], 60.00th=[ 314], 00:18:46.345 | 70.00th=[ 334], 80.00th=[ 371], 90.00th=[ 404], 95.00th=[ 424], 00:18:46.345 | 99.00th=[ 461], 99.50th=[ 490], 99.90th=[ 586], 99.95th=[ 857], 00:18:46.345 | 99.99th=[ 857] 00:18:46.345 bw ( KiB/s): min= 6283, max= 6283, per=36.46%, avg=6283.00, stdev= 0.00, samples=1 00:18:46.345 iops : min= 1570, max= 1570, avg=1570.00, stdev= 0.00, samples=1 00:18:46.345 lat (usec) : 250=15.62%, 500=69.40%, 750=14.86%, 1000=0.12% 00:18:46.345 cpu : usr=4.30%, sys=5.90%, ctx=2512, majf=0, minf=1 00:18:46.345 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:46.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.346 issued rwts: total=1024,1486,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.346 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:46.346 job1: (groupid=0, jobs=1): err= 0: pid=2793954: Sun Jul 14 04:35:06 2024 00:18:46.346 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:46.346 slat (nsec): min=6289, max=70934, avg=25726.70, stdev=10656.48 00:18:46.346 clat (usec): min=330, max=41078, avg=557.86, stdev=1794.08 00:18:46.346 lat (usec): min=340, max=41091, avg=583.58, stdev=1793.66 00:18:46.346 clat percentiles (usec): 00:18:46.346 | 1.00th=[ 338], 5.00th=[ 375], 10.00th=[ 408], 20.00th=[ 433], 00:18:46.346 | 30.00th=[ 449], 40.00th=[ 465], 50.00th=[ 478], 60.00th=[ 490], 00:18:46.346 | 70.00th=[ 506], 80.00th=[ 529], 90.00th=[ 553], 95.00th=[ 570], 00:18:46.346 | 99.00th=[ 644], 99.50th=[ 709], 99.90th=[41157], 99.95th=[41157], 00:18:46.346 | 99.99th=[41157] 00:18:46.346 write: IOPS=1452, BW=5810KiB/s (5950kB/s)(5816KiB/1001msec); 0 zone resets 00:18:46.346 slat (nsec): min=6256, max=73185, avg=16373.77, stdev=7952.20 00:18:46.346 clat (usec): min=197, max=553, avg=249.86, stdev=52.27 00:18:46.346 lat (usec): min=206, max=592, avg=266.24, stdev=53.33 00:18:46.346 clat percentiles (usec): 00:18:46.346 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 219], 00:18:46.346 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 237], 00:18:46.346 | 70.00th=[ 247], 80.00th=[ 269], 90.00th=[ 330], 95.00th=[ 379], 00:18:46.346 | 99.00th=[ 437], 99.50th=[ 457], 99.90th=[ 553], 99.95th=[ 553], 00:18:46.346 | 99.99th=[ 553] 00:18:46.346 bw ( KiB/s): min= 6464, max= 6464, per=37.51%, avg=6464.00, stdev= 0.00, samples=1 00:18:46.346 iops : min= 1616, max= 1616, avg=1616.00, stdev= 0.00, samples=1 00:18:46.346 lat (usec) : 250=42.66%, 500=43.46%, 750=13.68%, 1000=0.08% 00:18:46.346 lat (msec) : 2=0.04%, 50=0.08% 00:18:46.346 cpu : usr=2.30%, sys=5.60%, ctx=2479, majf=0, minf=1 00:18:46.346 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:46.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.346 issued rwts: total=1024,1454,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.346 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:46.346 job2: (groupid=0, 
jobs=1): err= 0: pid=2793955: Sun Jul 14 04:35:06 2024 00:18:46.346 read: IOPS=20, BW=80.8KiB/s (82.8kB/s)(84.0KiB/1039msec) 00:18:46.346 slat (nsec): min=13566, max=34411, avg=23307.81, stdev=8815.88 00:18:46.346 clat (usec): min=40662, max=41051, avg=40957.69, stdev=82.46 00:18:46.346 lat (usec): min=40696, max=41085, avg=40980.99, stdev=79.30 00:18:46.346 clat percentiles (usec): 00:18:46.346 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:18:46.346 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:46.346 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:46.346 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:46.346 | 99.99th=[41157] 00:18:46.346 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:18:46.346 slat (nsec): min=6273, max=73061, avg=18613.00, stdev=9933.10 00:18:46.346 clat (usec): min=225, max=893, avg=323.03, stdev=77.50 00:18:46.346 lat (usec): min=238, max=902, avg=341.64, stdev=78.51 00:18:46.346 clat percentiles (usec): 00:18:46.346 | 1.00th=[ 239], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 265], 00:18:46.346 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 314], 00:18:46.346 | 70.00th=[ 363], 80.00th=[ 392], 90.00th=[ 429], 95.00th=[ 461], 00:18:46.346 | 99.00th=[ 537], 99.50th=[ 644], 99.90th=[ 898], 99.95th=[ 898], 00:18:46.346 | 99.99th=[ 898] 00:18:46.346 bw ( KiB/s): min= 4087, max= 4087, per=23.72%, avg=4087.00, stdev= 0.00, samples=1 00:18:46.346 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:18:46.346 lat (usec) : 250=5.82%, 500=88.37%, 750=1.69%, 1000=0.19% 00:18:46.346 lat (msec) : 50=3.94% 00:18:46.346 cpu : usr=0.48%, sys=0.87%, ctx=533, majf=0, minf=1 00:18:46.346 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:46.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.346 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.346 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:46.346 job3: (groupid=0, jobs=1): err= 0: pid=2793956: Sun Jul 14 04:35:06 2024 00:18:46.346 read: IOPS=979, BW=3916KiB/s (4010kB/s)(3920KiB/1001msec) 00:18:46.346 slat (nsec): min=6190, max=57397, avg=17979.56, stdev=6962.01 00:18:46.346 clat (usec): min=337, max=41163, avg=693.03, stdev=3166.08 00:18:46.346 lat (usec): min=343, max=41183, avg=711.01, stdev=3166.42 00:18:46.346 clat percentiles (usec): 00:18:46.346 | 1.00th=[ 347], 5.00th=[ 375], 10.00th=[ 379], 20.00th=[ 388], 00:18:46.346 | 30.00th=[ 396], 40.00th=[ 408], 50.00th=[ 429], 60.00th=[ 461], 00:18:46.346 | 70.00th=[ 478], 80.00th=[ 498], 90.00th=[ 529], 95.00th=[ 570], 00:18:46.346 | 99.00th=[ 717], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:46.346 | 99.99th=[41157] 00:18:46.346 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:18:46.346 slat (nsec): min=6369, max=55659, avg=16055.51, stdev=8389.11 00:18:46.346 clat (usec): min=205, max=515, avg=269.57, stdev=43.59 00:18:46.346 lat (usec): min=214, max=541, avg=285.62, stdev=46.10 00:18:46.346 clat percentiles (usec): 00:18:46.346 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 243], 00:18:46.346 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 265], 00:18:46.346 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 310], 95.00th=[ 371], 00:18:46.346 | 99.00th=[ 449], 99.50th=[ 465], 
99.90th=[ 494], 99.95th=[ 515], 00:18:46.346 | 99.99th=[ 515] 00:18:46.346 bw ( KiB/s): min= 4096, max= 4096, per=23.77%, avg=4096.00, stdev= 0.00, samples=1 00:18:46.346 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:46.346 lat (usec) : 250=14.92%, 500=75.60%, 750=9.13%, 1000=0.05% 00:18:46.346 lat (msec) : 50=0.30% 00:18:46.346 cpu : usr=2.30%, sys=4.50%, ctx=2005, majf=0, minf=2 00:18:46.346 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:46.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.346 issued rwts: total=980,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.346 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:46.346 00:18:46.346 Run status group 0 (all jobs): 00:18:46.346 READ: bw=11.5MiB/s (12.0MB/s), 80.8KiB/s-4092KiB/s (82.8kB/s-4190kB/s), io=11.9MiB (12.5MB), run=1001-1039msec 00:18:46.346 WRITE: bw=16.8MiB/s (17.6MB/s), 1971KiB/s-5938KiB/s (2018kB/s-6081kB/s), io=17.5MiB (18.3MB), run=1001-1039msec 00:18:46.346 00:18:46.346 Disk stats (read/write): 00:18:46.346 nvme0n1: ios=1064/1108, merge=0/0, ticks=934/301, in_queue=1235, util=96.49% 00:18:46.346 nvme0n2: ios=990/1024, merge=0/0, ticks=844/251, in_queue=1095, util=97.66% 00:18:46.346 nvme0n3: ios=22/512, merge=0/0, ticks=908/153, in_queue=1061, util=91.41% 00:18:46.346 nvme0n4: ios=664/1024, merge=0/0, ticks=1453/263, in_queue=1716, util=97.78% 00:18:46.346 04:35:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:46.346 [global] 00:18:46.346 thread=1 00:18:46.346 invalidate=1 00:18:46.346 rw=write 00:18:46.346 time_based=1 00:18:46.346 runtime=1 00:18:46.346 ioengine=libaio 00:18:46.346 direct=1 00:18:46.346 bs=4096 00:18:46.346 iodepth=128 00:18:46.346 norandommap=0 00:18:46.346 numjobs=1 00:18:46.346 00:18:46.346 verify_dump=1 00:18:46.346 verify_backlog=512 00:18:46.346 verify_state_save=0 00:18:46.346 do_verify=1 00:18:46.346 verify=crc32c-intel 00:18:46.346 [job0] 00:18:46.346 filename=/dev/nvme0n1 00:18:46.346 [job1] 00:18:46.346 filename=/dev/nvme0n2 00:18:46.346 [job2] 00:18:46.346 filename=/dev/nvme0n3 00:18:46.346 [job3] 00:18:46.346 filename=/dev/nvme0n4 00:18:46.346 Could not set queue depth (nvme0n1) 00:18:46.346 Could not set queue depth (nvme0n2) 00:18:46.346 Could not set queue depth (nvme0n3) 00:18:46.346 Could not set queue depth (nvme0n4) 00:18:46.346 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:46.346 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:46.346 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:46.346 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:46.346 fio-3.35 00:18:46.346 Starting 4 threads 00:18:47.721 00:18:47.721 job0: (groupid=0, jobs=1): err= 0: pid=2794184: Sun Jul 14 04:35:07 2024 00:18:47.721 read: IOPS=2684, BW=10.5MiB/s (11.0MB/s)(10.6MiB/1008msec) 00:18:47.721 slat (usec): min=2, max=15642, avg=187.43, stdev=1085.84 00:18:47.721 clat (usec): min=2559, max=59841, avg=24216.89, stdev=10022.64 00:18:47.721 lat (usec): min=3683, max=59845, avg=24404.32, stdev=10067.77 00:18:47.721 clat percentiles (usec): 00:18:47.721 | 
1.00th=[ 7177], 5.00th=[ 9634], 10.00th=[11076], 20.00th=[12387], 00:18:47.721 | 30.00th=[19530], 40.00th=[22152], 50.00th=[23987], 60.00th=[26346], 00:18:47.721 | 70.00th=[29492], 80.00th=[31589], 90.00th=[39584], 95.00th=[42730], 00:18:47.721 | 99.00th=[45351], 99.50th=[46400], 99.90th=[49546], 99.95th=[49546], 00:18:47.721 | 99.99th=[60031] 00:18:47.721 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:18:47.722 slat (usec): min=3, max=20838, avg=152.65, stdev=871.11 00:18:47.722 clat (usec): min=4859, max=49249, avg=20083.05, stdev=8389.65 00:18:47.722 lat (usec): min=4863, max=49254, avg=20235.70, stdev=8439.59 00:18:47.722 clat percentiles (usec): 00:18:47.722 | 1.00th=[ 5538], 5.00th=[ 7767], 10.00th=[ 9503], 20.00th=[13173], 00:18:47.722 | 30.00th=[14877], 40.00th=[16581], 50.00th=[19006], 60.00th=[21103], 00:18:47.722 | 70.00th=[24249], 80.00th=[28181], 90.00th=[31589], 95.00th=[34341], 00:18:47.722 | 99.00th=[40109], 99.50th=[42206], 99.90th=[49021], 99.95th=[49021], 00:18:47.722 | 99.99th=[49021] 00:18:47.722 bw ( KiB/s): min=12288, max=12288, per=22.03%, avg=12288.00, stdev= 0.00, samples=2 00:18:47.722 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:18:47.722 lat (msec) : 4=0.19%, 10=8.39%, 20=36.00%, 50=55.40%, 100=0.02% 00:18:47.722 cpu : usr=3.38%, sys=5.56%, ctx=302, majf=0, minf=1 00:18:47.722 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:47.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.722 issued rwts: total=2706,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.722 job1: (groupid=0, jobs=1): err= 0: pid=2794185: Sun Jul 14 04:35:07 2024 00:18:47.722 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:18:47.722 slat (usec): min=2, max=47049, avg=111.12, stdev=906.88 00:18:47.722 clat (usec): min=7975, max=65251, avg=14314.60, stdev=8118.54 00:18:47.722 lat (usec): min=7984, max=65282, avg=14425.72, stdev=8163.76 00:18:47.722 clat percentiles (usec): 00:18:47.722 | 1.00th=[ 8586], 5.00th=[ 9896], 10.00th=[10945], 20.00th=[11600], 00:18:47.722 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12518], 00:18:47.722 | 70.00th=[13435], 80.00th=[14615], 90.00th=[16909], 95.00th=[19530], 00:18:47.722 | 99.00th=[61604], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604], 00:18:47.722 | 99.99th=[65274] 00:18:47.722 write: IOPS=4314, BW=16.9MiB/s (17.7MB/s)(16.9MiB/1003msec); 0 zone resets 00:18:47.722 slat (usec): min=3, max=7205, avg=119.01, stdev=589.74 00:18:47.722 clat (usec): min=597, max=35582, avg=15710.28, stdev=6114.74 00:18:47.722 lat (usec): min=3352, max=35594, avg=15829.29, stdev=6154.78 00:18:47.722 clat percentiles (usec): 00:18:47.722 | 1.00th=[ 5080], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[11731], 00:18:47.722 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13173], 60.00th=[14615], 00:18:47.722 | 70.00th=[16909], 80.00th=[20579], 90.00th=[26084], 95.00th=[28967], 00:18:47.722 | 99.00th=[31851], 99.50th=[33817], 99.90th=[35390], 99.95th=[35390], 00:18:47.722 | 99.99th=[35390] 00:18:47.722 bw ( KiB/s): min=16384, max=17208, per=30.11%, avg=16796.00, stdev=582.66, samples=2 00:18:47.722 iops : min= 4096, max= 4302, avg=4199.00, stdev=145.66, samples=2 00:18:47.722 lat (usec) : 750=0.01% 00:18:47.722 lat (msec) : 4=0.04%, 10=9.14%, 20=77.50%, 50=11.80%, 100=1.51% 
00:18:47.722 cpu : usr=2.99%, sys=6.59%, ctx=473, majf=0, minf=1 00:18:47.722 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:47.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.722 issued rwts: total=4096,4327,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.722 job2: (groupid=0, jobs=1): err= 0: pid=2794192: Sun Jul 14 04:35:07 2024 00:18:47.722 read: IOPS=2912, BW=11.4MiB/s (11.9MB/s)(11.5MiB/1009msec) 00:18:47.722 slat (usec): min=3, max=24952, avg=190.49, stdev=1205.55 00:18:47.722 clat (usec): min=2606, max=64995, avg=22763.02, stdev=11576.47 00:18:47.722 lat (usec): min=8965, max=68593, avg=22953.51, stdev=11675.93 00:18:47.722 clat percentiles (usec): 00:18:47.722 | 1.00th=[ 9896], 5.00th=[12387], 10.00th=[13042], 20.00th=[15401], 00:18:47.722 | 30.00th=[16909], 40.00th=[18482], 50.00th=[19268], 60.00th=[19792], 00:18:47.722 | 70.00th=[21627], 80.00th=[26608], 90.00th=[41681], 95.00th=[48497], 00:18:47.722 | 99.00th=[62129], 99.50th=[62129], 99.90th=[63177], 99.95th=[63177], 00:18:47.722 | 99.99th=[64750] 00:18:47.722 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:18:47.722 slat (usec): min=4, max=13075, avg=135.05, stdev=852.25 00:18:47.722 clat (usec): min=7986, max=63137, avg=19662.67, stdev=7784.71 00:18:47.722 lat (usec): min=7995, max=63198, avg=19797.72, stdev=7850.32 00:18:47.722 clat percentiles (usec): 00:18:47.722 | 1.00th=[ 9503], 5.00th=[11600], 10.00th=[12125], 20.00th=[13960], 00:18:47.722 | 30.00th=[15139], 40.00th=[15926], 50.00th=[17695], 60.00th=[19530], 00:18:47.722 | 70.00th=[21365], 80.00th=[25035], 90.00th=[29492], 95.00th=[32637], 00:18:47.722 | 99.00th=[54789], 99.50th=[55313], 99.90th=[55837], 99.95th=[61604], 00:18:47.722 | 99.99th=[63177] 00:18:47.722 bw ( KiB/s): min= 9800, max=14776, per=22.03%, avg=12288.00, stdev=3518.56, samples=2 00:18:47.722 iops : min= 2450, max= 3694, avg=3072.00, stdev=879.64, samples=2 00:18:47.722 lat (msec) : 4=0.02%, 10=1.18%, 20=62.17%, 50=34.04%, 100=2.60% 00:18:47.722 cpu : usr=3.87%, sys=6.15%, ctx=195, majf=0, minf=1 00:18:47.722 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:47.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.722 issued rwts: total=2939,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.722 job3: (groupid=0, jobs=1): err= 0: pid=2794193: Sun Jul 14 04:35:07 2024 00:18:47.722 read: IOPS=3218, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1009msec) 00:18:47.722 slat (usec): min=2, max=28849, avg=118.15, stdev=991.05 00:18:47.722 clat (usec): min=1282, max=47121, avg=17536.20, stdev=8554.19 00:18:47.722 lat (usec): min=1286, max=50348, avg=17654.36, stdev=8610.91 00:18:47.722 clat percentiles (usec): 00:18:47.722 | 1.00th=[ 3982], 5.00th=[ 5800], 10.00th=[ 8029], 20.00th=[11076], 00:18:47.722 | 30.00th=[12780], 40.00th=[14746], 50.00th=[15664], 60.00th=[17695], 00:18:47.722 | 70.00th=[19792], 80.00th=[21890], 90.00th=[29492], 95.00th=[36963], 00:18:47.722 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:18:47.722 | 99.99th=[46924] 00:18:47.722 write: IOPS=3565, BW=13.9MiB/s (14.6MB/s)(14.1MiB/1009msec); 0 zone resets 
00:18:47.722 slat (usec): min=3, max=14294, avg=137.53, stdev=766.32 00:18:47.722 clat (usec): min=466, max=69761, avg=19784.76, stdev=12579.96 00:18:47.722 lat (usec): min=529, max=69767, avg=19922.29, stdev=12630.79 00:18:47.722 clat percentiles (usec): 00:18:47.722 | 1.00th=[ 1221], 5.00th=[ 2737], 10.00th=[ 6194], 20.00th=[ 9634], 00:18:47.722 | 30.00th=[12256], 40.00th=[15533], 50.00th=[17695], 60.00th=[20579], 00:18:47.722 | 70.00th=[24511], 80.00th=[27132], 90.00th=[35390], 95.00th=[46400], 00:18:47.722 | 99.00th=[58983], 99.50th=[60031], 99.90th=[69731], 99.95th=[69731], 00:18:47.722 | 99.99th=[69731] 00:18:47.722 bw ( KiB/s): min=11608, max=16576, per=25.27%, avg=14092.00, stdev=3512.91, samples=2 00:18:47.722 iops : min= 2902, max= 4144, avg=3523.00, stdev=878.23, samples=2 00:18:47.722 lat (usec) : 500=0.01%, 1000=0.13% 00:18:47.722 lat (msec) : 2=1.77%, 4=1.99%, 10=15.16%, 20=44.69%, 50=33.97% 00:18:47.722 lat (msec) : 100=2.28% 00:18:47.722 cpu : usr=3.97%, sys=5.36%, ctx=429, majf=0, minf=1 00:18:47.722 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:47.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.722 issued rwts: total=3247,3598,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.722 00:18:47.722 Run status group 0 (all jobs): 00:18:47.722 READ: bw=50.3MiB/s (52.7MB/s), 10.5MiB/s-16.0MiB/s (11.0MB/s-16.7MB/s), io=50.7MiB (53.2MB), run=1003-1009msec 00:18:47.722 WRITE: bw=54.5MiB/s (57.1MB/s), 11.9MiB/s-16.9MiB/s (12.5MB/s-17.7MB/s), io=55.0MiB (57.6MB), run=1003-1009msec 00:18:47.722 00:18:47.722 Disk stats (read/write): 00:18:47.722 nvme0n1: ios=2278/2560, merge=0/0, ticks=20719/21675, in_queue=42394, util=86.67% 00:18:47.722 nvme0n2: ios=3315/3584, merge=0/0, ticks=19782/22503, in_queue=42285, util=99.59% 00:18:47.722 nvme0n3: ios=2582/2927, merge=0/0, ticks=19002/15852, in_queue=34854, util=99.06% 00:18:47.722 nvme0n4: ios=2560/2951, merge=0/0, ticks=38080/50920, in_queue=89000, util=88.82% 00:18:47.722 04:35:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:47.722 [global] 00:18:47.722 thread=1 00:18:47.722 invalidate=1 00:18:47.722 rw=randwrite 00:18:47.722 time_based=1 00:18:47.722 runtime=1 00:18:47.722 ioengine=libaio 00:18:47.722 direct=1 00:18:47.722 bs=4096 00:18:47.722 iodepth=128 00:18:47.722 norandommap=0 00:18:47.722 numjobs=1 00:18:47.722 00:18:47.722 verify_dump=1 00:18:47.722 verify_backlog=512 00:18:47.722 verify_state_save=0 00:18:47.722 do_verify=1 00:18:47.722 verify=crc32c-intel 00:18:47.722 [job0] 00:18:47.722 filename=/dev/nvme0n1 00:18:47.722 [job1] 00:18:47.722 filename=/dev/nvme0n2 00:18:47.722 [job2] 00:18:47.722 filename=/dev/nvme0n3 00:18:47.722 [job3] 00:18:47.722 filename=/dev/nvme0n4 00:18:47.722 Could not set queue depth (nvme0n1) 00:18:47.722 Could not set queue depth (nvme0n2) 00:18:47.722 Could not set queue depth (nvme0n3) 00:18:47.722 Could not set queue depth (nvme0n4) 00:18:47.980 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:47.980 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:47.980 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:18:47.981 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:47.981 fio-3.35 00:18:47.981 Starting 4 threads 00:18:49.361 00:18:49.361 job0: (groupid=0, jobs=1): err= 0: pid=2794423: Sun Jul 14 04:35:09 2024 00:18:49.361 read: IOPS=3178, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1006msec) 00:18:49.361 slat (usec): min=3, max=25790, avg=148.90, stdev=1130.96 00:18:49.361 clat (usec): min=2036, max=41090, avg=18367.59, stdev=6915.41 00:18:49.361 lat (usec): min=7383, max=41106, avg=18516.49, stdev=6961.81 00:18:49.361 clat percentiles (usec): 00:18:49.361 | 1.00th=[ 9896], 5.00th=[10290], 10.00th=[11469], 20.00th=[13042], 00:18:49.361 | 30.00th=[14615], 40.00th=[15008], 50.00th=[15401], 60.00th=[17433], 00:18:49.361 | 70.00th=[20055], 80.00th=[24249], 90.00th=[30540], 95.00th=[33424], 00:18:49.361 | 99.00th=[36439], 99.50th=[36963], 99.90th=[41157], 99.95th=[41157], 00:18:49.361 | 99.99th=[41157] 00:18:49.361 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:18:49.361 slat (usec): min=4, max=16134, avg=134.79, stdev=762.34 00:18:49.361 clat (usec): min=1147, max=51829, avg=19230.43, stdev=9409.68 00:18:49.361 lat (usec): min=1156, max=51846, avg=19365.22, stdev=9464.44 00:18:49.361 clat percentiles (usec): 00:18:49.361 | 1.00th=[ 3064], 5.00th=[ 7308], 10.00th=[ 9634], 20.00th=[11338], 00:18:49.361 | 30.00th=[13042], 40.00th=[15664], 50.00th=[17171], 60.00th=[21103], 00:18:49.361 | 70.00th=[22414], 80.00th=[24773], 90.00th=[30278], 95.00th=[37487], 00:18:49.361 | 99.00th=[51119], 99.50th=[51119], 99.90th=[51643], 99.95th=[51643], 00:18:49.361 | 99.99th=[51643] 00:18:49.361 bw ( KiB/s): min=13272, max=15384, per=20.60%, avg=14328.00, stdev=1493.41, samples=2 00:18:49.361 iops : min= 3318, max= 3846, avg=3582.00, stdev=373.35, samples=2 00:18:49.361 lat (msec) : 2=0.06%, 4=0.60%, 10=7.95%, 20=53.26%, 50=37.22% 00:18:49.361 lat (msec) : 100=0.91% 00:18:49.361 cpu : usr=5.57%, sys=8.86%, ctx=348, majf=0, minf=9 00:18:49.361 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:49.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:49.361 issued rwts: total=3198,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:49.361 job1: (groupid=0, jobs=1): err= 0: pid=2794433: Sun Jul 14 04:35:09 2024 00:18:49.361 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:18:49.361 slat (usec): min=3, max=14659, avg=91.02, stdev=545.06 00:18:49.361 clat (usec): min=4686, max=52062, avg=12268.19, stdev=3152.37 00:18:49.361 lat (usec): min=5219, max=52071, avg=12359.21, stdev=3179.39 00:18:49.361 clat percentiles (usec): 00:18:49.361 | 1.00th=[ 7767], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[10421], 00:18:49.361 | 30.00th=[10814], 40.00th=[11469], 50.00th=[11994], 60.00th=[12256], 00:18:49.361 | 70.00th=[12649], 80.00th=[13435], 90.00th=[15795], 95.00th=[17957], 00:18:49.361 | 99.00th=[22938], 99.50th=[24773], 99.90th=[52167], 99.95th=[52167], 00:18:49.361 | 99.99th=[52167] 00:18:49.361 write: IOPS=5509, BW=21.5MiB/s (22.6MB/s)(21.7MiB/1006msec); 0 zone resets 00:18:49.361 slat (usec): min=3, max=10147, avg=81.35, stdev=413.29 00:18:49.361 clat (usec): min=2744, max=27456, avg=11572.17, stdev=3901.30 00:18:49.361 lat (usec): min=2755, max=27481, avg=11653.52, stdev=3917.66 
00:18:49.361 clat percentiles (usec): 00:18:49.361 | 1.00th=[ 5211], 5.00th=[ 6652], 10.00th=[ 7308], 20.00th=[ 8848], 00:18:49.361 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[10945], 60.00th=[11338], 00:18:49.361 | 70.00th=[12256], 80.00th=[13304], 90.00th=[16319], 95.00th=[20841], 00:18:49.361 | 99.00th=[24773], 99.50th=[25297], 99.90th=[27395], 99.95th=[27395], 00:18:49.361 | 99.99th=[27395] 00:18:49.361 bw ( KiB/s): min=20896, max=22432, per=31.14%, avg=21664.00, stdev=1086.12, samples=2 00:18:49.361 iops : min= 5224, max= 5608, avg=5416.00, stdev=271.53, samples=2 00:18:49.361 lat (msec) : 4=0.14%, 10=25.43%, 20=70.29%, 50=4.07%, 100=0.07% 00:18:49.361 cpu : usr=8.46%, sys=13.03%, ctx=541, majf=0, minf=11 00:18:49.361 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:49.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:49.361 issued rwts: total=5120,5543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:49.361 job2: (groupid=0, jobs=1): err= 0: pid=2794456: Sun Jul 14 04:35:09 2024 00:18:49.361 read: IOPS=4143, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1004msec) 00:18:49.361 slat (usec): min=2, max=14386, avg=102.55, stdev=566.25 00:18:49.361 clat (usec): min=1835, max=37521, avg=13795.50, stdev=3061.44 00:18:49.361 lat (usec): min=2948, max=38865, avg=13898.05, stdev=3087.42 00:18:49.361 clat percentiles (usec): 00:18:49.361 | 1.00th=[ 6194], 5.00th=[10290], 10.00th=[11076], 20.00th=[12256], 00:18:49.361 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13566], 60.00th=[13960], 00:18:49.361 | 70.00th=[14353], 80.00th=[15139], 90.00th=[16581], 95.00th=[18744], 00:18:49.361 | 99.00th=[26608], 99.50th=[26608], 99.90th=[31327], 99.95th=[31327], 00:18:49.361 | 99.99th=[37487] 00:18:49.361 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:18:49.361 slat (usec): min=4, max=17718, avg=108.71, stdev=725.50 00:18:49.361 clat (usec): min=7162, max=38025, avg=15097.02, stdev=4833.25 00:18:49.361 lat (usec): min=7306, max=38095, avg=15205.73, stdev=4858.24 00:18:49.361 clat percentiles (usec): 00:18:49.362 | 1.00th=[ 8225], 5.00th=[10421], 10.00th=[11600], 20.00th=[12649], 00:18:49.362 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:18:49.362 | 70.00th=[14484], 80.00th=[16909], 90.00th=[20317], 95.00th=[30016], 00:18:49.362 | 99.00th=[32900], 99.50th=[32900], 99.90th=[32900], 99.95th=[32900], 00:18:49.362 | 99.99th=[38011] 00:18:49.362 bw ( KiB/s): min=16952, max=19400, per=26.13%, avg=18176.00, stdev=1731.00, samples=2 00:18:49.362 iops : min= 4238, max= 4850, avg=4544.00, stdev=432.75, samples=2 00:18:49.362 lat (msec) : 2=0.01%, 4=0.01%, 10=4.22%, 20=87.79%, 50=7.97% 00:18:49.362 cpu : usr=6.38%, sys=12.26%, ctx=420, majf=0, minf=13 00:18:49.362 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:49.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:49.362 issued rwts: total=4160,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:49.362 job3: (groupid=0, jobs=1): err= 0: pid=2794466: Sun Jul 14 04:35:09 2024 00:18:49.362 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:18:49.362 slat (usec): min=2, max=17146, avg=132.61, 
stdev=830.50 00:18:49.362 clat (usec): min=8261, max=59565, avg=16007.03, stdev=6737.86 00:18:49.362 lat (usec): min=8271, max=59569, avg=16139.64, stdev=6809.89 00:18:49.362 clat percentiles (usec): 00:18:49.362 | 1.00th=[ 9241], 5.00th=[11076], 10.00th=[11469], 20.00th=[12649], 00:18:49.362 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13960], 60.00th=[14484], 00:18:49.362 | 70.00th=[16057], 80.00th=[16909], 90.00th=[20841], 95.00th=[32900], 00:18:49.362 | 99.00th=[45876], 99.50th=[56886], 99.90th=[59507], 99.95th=[59507], 00:18:49.362 | 99.99th=[59507] 00:18:49.362 write: IOPS=3736, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1006msec); 0 zone resets 00:18:49.362 slat (usec): min=3, max=25636, avg=127.26, stdev=733.45 00:18:49.362 clat (usec): min=5574, max=64365, avg=18269.51, stdev=10625.65 00:18:49.362 lat (usec): min=6287, max=64377, avg=18396.77, stdev=10671.23 00:18:49.362 clat percentiles (usec): 00:18:49.362 | 1.00th=[ 8094], 5.00th=[ 9765], 10.00th=[11600], 20.00th=[12518], 00:18:49.362 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13698], 60.00th=[14222], 00:18:49.362 | 70.00th=[19268], 80.00th=[22676], 90.00th=[30540], 95.00th=[49021], 00:18:49.362 | 99.00th=[57410], 99.50th=[58459], 99.90th=[62129], 99.95th=[62129], 00:18:49.362 | 99.99th=[64226] 00:18:49.362 bw ( KiB/s): min=11912, max=17144, per=20.89%, avg=14528.00, stdev=3699.58, samples=2 00:18:49.362 iops : min= 2978, max= 4286, avg=3632.00, stdev=924.90, samples=2 00:18:49.362 lat (msec) : 10=4.14%, 20=75.46%, 50=17.68%, 100=2.72% 00:18:49.362 cpu : usr=5.37%, sys=8.56%, ctx=412, majf=0, minf=17 00:18:49.362 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:49.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:49.362 issued rwts: total=3584,3759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:49.362 00:18:49.362 Run status group 0 (all jobs): 00:18:49.362 READ: bw=62.4MiB/s (65.4MB/s), 12.4MiB/s-19.9MiB/s (13.0MB/s-20.8MB/s), io=62.7MiB (65.8MB), run=1004-1006msec 00:18:49.362 WRITE: bw=67.9MiB/s (71.2MB/s), 13.9MiB/s-21.5MiB/s (14.6MB/s-22.6MB/s), io=68.3MiB (71.7MB), run=1004-1006msec 00:18:49.362 00:18:49.362 Disk stats (read/write): 00:18:49.362 nvme0n1: ios=2610/3047, merge=0/0, ticks=44564/58571, in_queue=103135, util=90.48% 00:18:49.362 nvme0n2: ios=4375/4608, merge=0/0, ticks=41086/41405, in_queue=82491, util=97.04% 00:18:49.362 nvme0n3: ios=3630/3671, merge=0/0, ticks=27459/31100, in_queue=58559, util=96.11% 00:18:49.362 nvme0n4: ios=2745/3072, merge=0/0, ticks=23614/26714, in_queue=50328, util=93.75% 00:18:49.362 04:35:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:49.362 04:35:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2794639 00:18:49.362 04:35:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:49.362 04:35:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:49.362 [global] 00:18:49.362 thread=1 00:18:49.362 invalidate=1 00:18:49.362 rw=read 00:18:49.362 time_based=1 00:18:49.362 runtime=10 00:18:49.362 ioengine=libaio 00:18:49.362 direct=1 00:18:49.362 bs=4096 00:18:49.362 iodepth=1 00:18:49.362 norandommap=1 00:18:49.362 numjobs=1 00:18:49.362 00:18:49.362 [job0] 00:18:49.362 filename=/dev/nvme0n1 00:18:49.362 [job1] 00:18:49.362 
filename=/dev/nvme0n2 00:18:49.362 [job2] 00:18:49.362 filename=/dev/nvme0n3 00:18:49.362 [job3] 00:18:49.362 filename=/dev/nvme0n4 00:18:49.362 Could not set queue depth (nvme0n1) 00:18:49.362 Could not set queue depth (nvme0n2) 00:18:49.362 Could not set queue depth (nvme0n3) 00:18:49.362 Could not set queue depth (nvme0n4) 00:18:49.362 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:49.362 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:49.362 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:49.362 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:49.362 fio-3.35 00:18:49.362 Starting 4 threads 00:18:52.645 04:35:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:52.645 04:35:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:52.645 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=2289664, buflen=4096 00:18:52.645 fio: pid=2794768, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:52.645 04:35:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:52.645 04:35:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:52.645 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=23060480, buflen=4096 00:18:52.645 fio: pid=2794767, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:52.903 04:35:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:52.903 04:35:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:52.903 fio: io_u error on file /dev/nvme0n1: Input/output error: read offset=1646592, buflen=4096 00:18:52.903 fio: pid=2794765, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:18:53.160 04:35:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:53.160 04:35:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:53.160 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=33558528, buflen=4096 00:18:53.160 fio: pid=2794766, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:53.160 00:18:53.160 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2794765: Sun Jul 14 04:35:13 2024 00:18:53.160 read: IOPS=116, BW=466KiB/s (477kB/s)(1608KiB/3451msec) 00:18:53.160 slat (usec): min=5, max=18008, avg=121.35, stdev=1186.99 00:18:53.160 clat (usec): min=329, max=41454, avg=8459.44, stdev=16123.45 00:18:53.160 lat (usec): min=342, max=52973, avg=8558.29, stdev=16208.39 00:18:53.160 clat percentiles (usec): 00:18:53.160 | 1.00th=[ 347], 5.00th=[ 355], 10.00th=[ 367], 20.00th=[ 396], 00:18:53.160 | 30.00th=[ 445], 40.00th=[ 474], 50.00th=[ 490], 60.00th=[ 506], 00:18:53.160 | 
70.00th=[ 523], 80.00th=[ 709], 90.00th=[41157], 95.00th=[41157], 00:18:53.160 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:18:53.160 | 99.99th=[41681] 00:18:53.160 bw ( KiB/s): min= 96, max= 1920, per=3.05%, avg=484.00, stdev=713.85, samples=6 00:18:53.160 iops : min= 24, max= 480, avg=121.00, stdev=178.46, samples=6 00:18:53.160 lat (usec) : 500=57.32%, 750=22.58% 00:18:53.160 lat (msec) : 20=0.25%, 50=19.60% 00:18:53.160 cpu : usr=0.09%, sys=0.32%, ctx=407, majf=0, minf=1 00:18:53.160 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.160 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.160 issued rwts: total=403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.160 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2794766: Sun Jul 14 04:35:13 2024 00:18:53.161 read: IOPS=2196, BW=8784KiB/s (8995kB/s)(32.0MiB/3731msec) 00:18:53.161 slat (usec): min=5, max=3527, avg=11.58, stdev=40.21 00:18:53.161 clat (usec): min=312, max=51256, avg=438.12, stdev=1605.67 00:18:53.161 lat (usec): min=318, max=51270, avg=449.70, stdev=1619.69 00:18:53.161 clat percentiles (usec): 00:18:53.161 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 343], 00:18:53.161 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 375], 00:18:53.161 | 70.00th=[ 383], 80.00th=[ 392], 90.00th=[ 420], 95.00th=[ 486], 00:18:53.161 | 99.00th=[ 562], 99.50th=[ 611], 99.90th=[41157], 99.95th=[41157], 00:18:53.161 | 99.99th=[51119] 00:18:53.161 bw ( KiB/s): min= 1836, max=11104, per=57.85%, avg=9169.71, stdev=3266.08, samples=7 00:18:53.161 iops : min= 459, max= 2776, avg=2292.43, stdev=816.52, samples=7 00:18:53.161 lat (usec) : 500=95.91%, 750=3.86%, 1000=0.02% 00:18:53.161 lat (msec) : 2=0.01%, 4=0.02%, 20=0.01%, 50=0.13%, 100=0.01% 00:18:53.161 cpu : usr=1.77%, sys=3.57%, ctx=8199, majf=0, minf=1 00:18:53.161 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.161 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.161 issued rwts: total=8194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.161 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.161 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2794767: Sun Jul 14 04:35:13 2024 00:18:53.161 read: IOPS=1768, BW=7073KiB/s (7243kB/s)(22.0MiB/3184msec) 00:18:53.161 slat (nsec): min=5266, max=56030, avg=11915.86, stdev=5731.54 00:18:53.161 clat (usec): min=325, max=42220, avg=546.57, stdev=557.94 00:18:53.161 lat (usec): min=335, max=42226, avg=558.48, stdev=558.04 00:18:53.161 clat percentiles (usec): 00:18:53.161 | 1.00th=[ 347], 5.00th=[ 490], 10.00th=[ 515], 20.00th=[ 523], 00:18:53.161 | 30.00th=[ 529], 40.00th=[ 537], 50.00th=[ 545], 60.00th=[ 553], 00:18:53.161 | 70.00th=[ 562], 80.00th=[ 570], 90.00th=[ 578], 95.00th=[ 586], 00:18:53.161 | 99.00th=[ 603], 99.50th=[ 619], 99.90th=[ 693], 99.95th=[ 996], 00:18:53.161 | 99.99th=[42206] 00:18:53.161 bw ( KiB/s): min= 6624, max= 7720, per=44.90%, avg=7117.33, stdev=383.76, samples=6 00:18:53.161 iops : min= 1656, max= 1930, avg=1779.33, stdev=95.94, samples=6 00:18:53.161 lat (usec) : 500=6.02%, 750=93.87%, 1000=0.05% 
00:18:53.161 lat (msec) : 4=0.02%, 50=0.02% 00:18:53.161 cpu : usr=1.19%, sys=3.46%, ctx=5631, majf=0, minf=1 00:18:53.161 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.161 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.161 issued rwts: total=5631,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.161 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.161 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2794768: Sun Jul 14 04:35:13 2024 00:18:53.161 read: IOPS=190, BW=762KiB/s (780kB/s)(2236KiB/2935msec) 00:18:53.161 slat (nsec): min=6408, max=38871, avg=13886.06, stdev=4476.07 00:18:53.161 clat (usec): min=420, max=41263, avg=5191.81, stdev=12989.71 00:18:53.161 lat (usec): min=434, max=41279, avg=5205.66, stdev=12992.33 00:18:53.161 clat percentiles (usec): 00:18:53.161 | 1.00th=[ 437], 5.00th=[ 449], 10.00th=[ 461], 20.00th=[ 474], 00:18:53.161 | 30.00th=[ 478], 40.00th=[ 482], 50.00th=[ 482], 60.00th=[ 490], 00:18:53.161 | 70.00th=[ 494], 80.00th=[ 515], 90.00th=[41157], 95.00th=[41157], 00:18:53.161 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:53.161 | 99.99th=[41157] 00:18:53.161 bw ( KiB/s): min= 96, max= 3992, per=5.54%, avg=878.40, stdev=1740.56, samples=5 00:18:53.161 iops : min= 24, max= 998, avg=219.60, stdev=435.14, samples=5 00:18:53.161 lat (usec) : 500=75.00%, 750=13.21% 00:18:53.161 lat (msec) : 50=11.61% 00:18:53.161 cpu : usr=0.14%, sys=0.24%, ctx=560, majf=0, minf=1 00:18:53.161 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.161 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.161 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.161 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.161 00:18:53.161 Run status group 0 (all jobs): 00:18:53.161 READ: bw=15.5MiB/s (16.2MB/s), 466KiB/s-8784KiB/s (477kB/s-8995kB/s), io=57.8MiB (60.6MB), run=2935-3731msec 00:18:53.161 00:18:53.161 Disk stats (read/write): 00:18:53.161 nvme0n1: ios=399/0, merge=0/0, ticks=3280/0, in_queue=3280, util=95.05% 00:18:53.161 nvme0n2: ios=8114/0, merge=0/0, ticks=4251/0, in_queue=4251, util=99.84% 00:18:53.161 nvme0n3: ios=5523/0, merge=0/0, ticks=2961/0, in_queue=2961, util=96.82% 00:18:53.161 nvme0n4: ios=557/0, merge=0/0, ticks=2819/0, in_queue=2819, util=96.74% 00:18:53.418 04:35:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:53.418 04:35:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:53.675 04:35:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:53.675 04:35:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:53.932 04:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:53.932 04:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc5 00:18:54.189 04:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:54.189 04:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:54.446 04:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:54.446 04:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2794639 00:18:54.446 04:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:54.446 04:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:54.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:54.703 04:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:54.703 04:35:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:18:54.703 04:35:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:54.703 04:35:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:54.703 04:35:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:54.703 04:35:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:54.703 04:35:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:18:54.703 04:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:54.703 04:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:54.703 nvmf hotplug test: fio failed as expected 00:18:54.703 04:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:54.961 04:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:54.961 04:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:54.961 04:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:54.961 04:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:54.961 04:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:54.961 04:35:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:54.961 04:35:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:54.961 04:35:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:54.961 04:35:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:54.961 04:35:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:54.961 04:35:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:54.961 rmmod nvme_tcp 00:18:54.961 rmmod nvme_fabrics 00:18:54.961 rmmod nvme_keyring 00:18:54.961 04:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:54.961 04:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:54.961 04:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:54.961 04:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2792645 ']' 00:18:54.961 04:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2792645 
00:18:54.961 04:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 2792645 ']' 00:18:54.961 04:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 2792645 00:18:54.961 04:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:18:54.961 04:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:54.961 04:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2792645 00:18:54.961 04:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:54.961 04:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:54.961 04:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2792645' 00:18:54.961 killing process with pid 2792645 00:18:54.961 04:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 2792645 00:18:54.961 04:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 2792645 00:18:55.220 04:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:55.220 04:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:55.220 04:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:55.220 04:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:55.220 04:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:55.220 04:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.220 04:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.220 04:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.750 04:35:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:57.750 00:18:57.750 real 0m23.279s 00:18:57.750 user 1m21.848s 00:18:57.750 sys 0m6.647s 00:18:57.750 04:35:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:57.750 04:35:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.750 ************************************ 00:18:57.750 END TEST nvmf_fio_target 00:18:57.750 ************************************ 00:18:57.750 04:35:17 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:57.750 04:35:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:57.750 04:35:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:57.750 04:35:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:57.750 ************************************ 00:18:57.750 START TEST nvmf_bdevio 00:18:57.750 ************************************ 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:57.750 * Looking for test storage... 
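The killprocess trace above checks that the pid still exists (kill -0), looks up its command name with ps, then kills and reaps it. A minimal sketch of that shape, assuming the helper does nothing beyond what is traced here:
  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 0            # already gone, nothing to do
      local name
      name=$(ps --no-headers -o comm= "$pid")           # e.g. reactor_0 for nvmf_tgt
      echo "killing process with pid $pid ($name)"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                   # reap it when it is a child of this shell
  }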
00:18:57.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:57.750 04:35:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:59.653 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:59.653 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:59.653 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:59.654 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:59.654 
Found net devices under 0000:0a:00.1: cvl_0_1 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:59.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:59.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:18:59.654 00:18:59.654 --- 10.0.0.2 ping statistics --- 00:18:59.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.654 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:59.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:59.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:18:59.654 00:18:59.654 --- 10.0.0.1 ping statistics --- 00:18:59.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.654 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2797386 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2797386 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 2797386 ']' 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:59.654 04:35:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.654 [2024-07-14 04:35:19.735106] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:59.654 [2024-07-14 04:35:19.735197] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.654 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.654 [2024-07-14 04:35:19.802051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:59.913 [2024-07-14 04:35:19.889030] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.913 [2024-07-14 04:35:19.889084] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
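The nvmf_tcp_init sequence traced above gives the target and the initiator separate network stacks on one machine: the first ice port (cvl_0_0) is moved into a private namespace for nvmf_tgt, while the second (cvl_0_1) stays in the root namespace for the kernel initiator, yielding a real 10.0.0.0/24 NVMe/TCP path. Condensed from the trace (commands as executed; only the ordering comments are added):
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # let NVMe/TCP in
  ping -c 1 10.0.0.2                                                     # sanity check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1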
00:18:59.913 [2024-07-14 04:35:19.889113] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.913 [2024-07-14 04:35:19.889125] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.913 [2024-07-14 04:35:19.889135] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:59.913 [2024-07-14 04:35:19.889201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:59.913 [2024-07-14 04:35:19.889271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:59.913 [2024-07-14 04:35:19.889322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:59.913 [2024-07-14 04:35:19.889325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.913 [2024-07-14 04:35:20.032465] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.913 Malloc0 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
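The rpc_cmd calls above appear to be thin wrappers around scripts/rpc.py talking to the target's /var/tmp/spdk.sock; restated as direct invocations (flags exactly as traced, comments are interpretation):
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                                     # TCP transport, options from NVMF_TRANSPORT_OPTS
  $RPC bdev_malloc_create 64 512 -b Malloc0                                        # 64 MiB RAM bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the bdev as a namespace
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420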
00:18:59.913 [2024-07-14 04:35:20.083636] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:59.913 { 00:18:59.913 "params": { 00:18:59.913 "name": "Nvme$subsystem", 00:18:59.913 "trtype": "$TEST_TRANSPORT", 00:18:59.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:59.913 "adrfam": "ipv4", 00:18:59.913 "trsvcid": "$NVMF_PORT", 00:18:59.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:59.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:59.913 "hdgst": ${hdgst:-false}, 00:18:59.913 "ddgst": ${ddgst:-false} 00:18:59.913 }, 00:18:59.913 "method": "bdev_nvme_attach_controller" 00:18:59.913 } 00:18:59.913 EOF 00:18:59.913 )") 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:59.913 04:35:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:59.913 "params": { 00:18:59.913 "name": "Nvme1", 00:18:59.913 "trtype": "tcp", 00:18:59.913 "traddr": "10.0.0.2", 00:18:59.913 "adrfam": "ipv4", 00:18:59.913 "trsvcid": "4420", 00:18:59.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:59.913 "hdgst": false, 00:18:59.913 "ddgst": false 00:18:59.913 }, 00:18:59.913 "method": "bdev_nvme_attach_controller" 00:18:59.913 }' 00:19:00.172 [2024-07-14 04:35:20.126008] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
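bdevio receives its bdev configuration through file descriptor 62, a process substitution of gen_nvmf_target_json. A rough standalone equivalent, assuming the helper wraps the fragment printed above in a standard SPDK --json config (the subsystems wrapper and the temp file here are illustrative, not reproduced from the script, and the real helper likely appends further entries such as a wait-for-examine step):
  cat > /tmp/nvme1.json <<'EOF'
  {
    "subsystems": [ {
      "subsystem": "bdev",
      "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": {
          "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
          "adrfam": "ipv4", "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode1",
          "hostnqn": "nqn.2016-06.io.spdk:host1",
          "hdgst": false, "ddgst": false
        }
      } ]
    } ]
  }
  EOF
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /tmp/nvme1.json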
00:19:00.172 [2024-07-14 04:35:20.126087] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2797410 ] 00:19:00.172 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.172 [2024-07-14 04:35:20.186698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:00.172 [2024-07-14 04:35:20.279805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.172 [2024-07-14 04:35:20.279857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.172 [2024-07-14 04:35:20.279859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.430 I/O targets: 00:19:00.430 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:00.430 00:19:00.430 00:19:00.430 CUnit - A unit testing framework for C - Version 2.1-3 00:19:00.430 http://cunit.sourceforge.net/ 00:19:00.430 00:19:00.430 00:19:00.430 Suite: bdevio tests on: Nvme1n1 00:19:00.430 Test: blockdev write read block ...passed 00:19:00.430 Test: blockdev write zeroes read block ...passed 00:19:00.430 Test: blockdev write zeroes read no split ...passed 00:19:00.430 Test: blockdev write zeroes read split ...passed 00:19:00.688 Test: blockdev write zeroes read split partial ...passed 00:19:00.688 Test: blockdev reset ...[2024-07-14 04:35:20.667856] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:00.688 [2024-07-14 04:35:20.668077] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd8a00 (9): Bad file descriptor 00:19:00.688 [2024-07-14 04:35:20.685658] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:00.688 passed 00:19:00.688 Test: blockdev write read 8 blocks ...passed 00:19:00.688 Test: blockdev write read size > 128k ...passed 00:19:00.688 Test: blockdev write read invalid size ...passed 00:19:00.688 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:00.688 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:00.688 Test: blockdev write read max offset ...passed 00:19:00.688 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:00.688 Test: blockdev writev readv 8 blocks ...passed 00:19:00.688 Test: blockdev writev readv 30 x 1block ...passed 00:19:00.946 Test: blockdev writev readv block ...passed 00:19:00.946 Test: blockdev writev readv size > 128k ...passed 00:19:00.946 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:00.946 Test: blockdev comparev and writev ...[2024-07-14 04:35:20.902786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.946 [2024-07-14 04:35:20.902822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:00.946 [2024-07-14 04:35:20.902846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.946 [2024-07-14 04:35:20.902863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:00.946 [2024-07-14 04:35:20.903276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.946 [2024-07-14 04:35:20.903300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:00.946 [2024-07-14 04:35:20.903322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.946 [2024-07-14 04:35:20.903339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:00.946 [2024-07-14 04:35:20.903750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.946 [2024-07-14 04:35:20.903773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.946 [2024-07-14 04:35:20.903794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.946 [2024-07-14 04:35:20.903809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:00.946 [2024-07-14 04:35:20.904227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.946 [2024-07-14 04:35:20.904250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:00.946 [2024-07-14 04:35:20.904271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.946 [2024-07-14 04:35:20.904287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:00.946 passed 00:19:00.946 Test: blockdev nvme passthru rw ...passed 00:19:00.946 Test: blockdev nvme passthru vendor specific ...[2024-07-14 04:35:20.988283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:00.946 [2024-07-14 04:35:20.988308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:00.946 [2024-07-14 04:35:20.988522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:00.946 [2024-07-14 04:35:20.988546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:00.946 [2024-07-14 04:35:20.988754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:00.946 [2024-07-14 04:35:20.988777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:00.946 [2024-07-14 04:35:20.988984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:00.946 [2024-07-14 04:35:20.989008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:00.946 passed 00:19:00.946 Test: blockdev nvme admin passthru ...passed 00:19:00.946 Test: blockdev copy ...passed 00:19:00.946 00:19:00.946 Run Summary: Type Total Ran Passed Failed Inactive 00:19:00.946 suites 1 1 n/a 0 0 00:19:00.946 tests 23 23 23 0 0 00:19:00.946 asserts 152 152 152 0 n/a 00:19:00.946 00:19:00.946 Elapsed time = 1.189 seconds 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:01.203 rmmod nvme_tcp 00:19:01.203 rmmod nvme_fabrics 00:19:01.203 rmmod nvme_keyring 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2797386 ']' 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2797386 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
2797386 ']' 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 2797386 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2797386 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2797386' 00:19:01.203 killing process with pid 2797386 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 2797386 00:19:01.203 04:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 2797386 00:19:01.461 04:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:01.461 04:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:01.461 04:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:01.461 04:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:01.461 04:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:01.461 04:35:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.461 04:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:01.461 04:35:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.019 04:35:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:04.019 00:19:04.019 real 0m6.222s 00:19:04.019 user 0m9.437s 00:19:04.019 sys 0m2.106s 00:19:04.019 04:35:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:04.019 04:35:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:04.019 ************************************ 00:19:04.019 END TEST nvmf_bdevio 00:19:04.019 ************************************ 00:19:04.019 04:35:23 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:04.019 04:35:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:04.019 04:35:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:04.019 04:35:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:04.019 ************************************ 00:19:04.019 START TEST nvmf_auth_target 00:19:04.019 ************************************ 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:04.019 * Looking for test storage... 
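Teardown at the end of each sub-test follows the same pattern seen twice now. A condensed sketch of the traced nvmftestfini path (helper names from the trace, bodies abbreviated and partly assumed):
  nvmftestfini() {
      nvmfcleanup                  # modprobe -v -r nvme-tcp; nvme_fabrics and nvme_keyring fall out as unused deps
      killprocess "$nvmfpid"       # stop the nvmf_tgt reactors
      remove_spdk_ns               # presumably deletes the cvl_0_0_ns_spdk namespace
      ip -4 addr flush cvl_0_1     # clear the initiator-side address
  }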
00:19:04.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:04.019 04:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:05.931 04:35:25 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:05.931 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:05.931 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:05.931 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:05.931 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:05.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:05.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:19:05.931 00:19:05.931 --- 10.0.0.2 ping statistics --- 00:19:05.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.931 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:05.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:05.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:19:05.931 00:19:05.931 --- 10.0.0.1 ping statistics --- 00:19:05.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.931 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2799477 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2799477 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 2799477 ']' 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
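
The trace above is nvmf_tcp_init building the point-to-point TCP test link: one of the two cvl_* ports (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the peer port (cvl_0_1) stays in the default namespace as 10.0.0.1, port 4420 is opened in iptables, connectivity is verified with one ping in each direction, and nvme-tcp is modprobed for the host side before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of that sequence follows; the interface names, namespace name, and addresses are taken from this run, and the commands illustrate the steps rather than reproduce the helper in nvmf/common.sh verbatim.

# Sketch of the TCP test-network setup traced above (run as root; assumes the
# two ports cvl_0_0 / cvl_0_1 exist and are otherwise unused).
TARGET_IF=cvl_0_0          # ends up inside the namespace as 10.0.0.2
INITIATOR_IF=cvl_0_1       # stays in the default namespace as 10.0.0.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in from the initiator-side interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Same connectivity check the test performs, once in each direction.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

Because the target runs under "ip netns exec cvl_0_0_ns_spdk", every connect later in this log goes from the initiator side (10.0.0.1, cvl_0_1) to the namespaced target (10.0.0.2, cvl_0_0).
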
00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:05.931 04:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.931 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:05.932 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:05.932 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:05.932 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:05.932 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2799560 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3ca19a98bef9dffddf80afb647c4e0b8bffabe2ea2cc55dc 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.buT 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3ca19a98bef9dffddf80afb647c4e0b8bffabe2ea2cc55dc 0 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3ca19a98bef9dffddf80afb647c4e0b8bffabe2ea2cc55dc 0 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3ca19a98bef9dffddf80afb647c4e0b8bffabe2ea2cc55dc 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.buT 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.buT 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.buT 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=87fe8fdc38f442ba7d62fb0e7bf4fad60ea7ba4f3dafe18e52c2985a2f116e55 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.xbL 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 87fe8fdc38f442ba7d62fb0e7bf4fad60ea7ba4f3dafe18e52c2985a2f116e55 3 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 87fe8fdc38f442ba7d62fb0e7bf4fad60ea7ba4f3dafe18e52c2985a2f116e55 3 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=87fe8fdc38f442ba7d62fb0e7bf4fad60ea7ba4f3dafe18e52c2985a2f116e55 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.xbL 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.xbL 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.xbL 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:06.191 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d3718bbdccf389e75bcb873324ee8d16 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.vfl 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d3718bbdccf389e75bcb873324ee8d16 1 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d3718bbdccf389e75bcb873324ee8d16 1 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=d3718bbdccf389e75bcb873324ee8d16 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.vfl 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.vfl 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.vfl 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b51aa3402381cf6526c3c5bed67552c85526eb4c4e6fa7d2 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.m5y 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b51aa3402381cf6526c3c5bed67552c85526eb4c4e6fa7d2 2 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b51aa3402381cf6526c3c5bed67552c85526eb4c4e6fa7d2 2 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b51aa3402381cf6526c3c5bed67552c85526eb4c4e6fa7d2 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.m5y 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.m5y 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.m5y 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1eee4a9234efab720d260d8798c9c42382fbb70d93bea30f 00:19:06.192 
04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.zyq 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1eee4a9234efab720d260d8798c9c42382fbb70d93bea30f 2 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1eee4a9234efab720d260d8798c9c42382fbb70d93bea30f 2 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1eee4a9234efab720d260d8798c9c42382fbb70d93bea30f 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:06.192 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.zyq 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.zyq 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.zyq 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=279c50d3b067a8c9e9a795a7ef17a209 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Kyw 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 279c50d3b067a8c9e9a795a7ef17a209 1 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 279c50d3b067a8c9e9a795a7ef17a209 1 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=279c50d3b067a8c9e9a795a7ef17a209 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Kyw 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Kyw 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Kyw 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=28d6b7f3900f3cf7ac75dc17bb97fb41e29161d69386db49eb019266099ac8ec 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.MU8 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 28d6b7f3900f3cf7ac75dc17bb97fb41e29161d69386db49eb019266099ac8ec 3 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 28d6b7f3900f3cf7ac75dc17bb97fb41e29161d69386db49eb019266099ac8ec 3 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=28d6b7f3900f3cf7ac75dc17bb97fb41e29161d69386db49eb019266099ac8ec 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.MU8 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.MU8 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.MU8 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2799477 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 2799477 ']' 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
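
The gen_dhchap_key calls traced above (target/auth.sh@67-70) produce the keys[0..3] and ckeys[0..2] secrets used for the rest of the test: len/2 random bytes are read with xxd -p from /dev/urandom, the hex string is wrapped into a DHHC-1 secret by a small inline Python step, written to a mktemp file, and chmod'ed to 0600. The sketch below approximates that helper; the digest index mapping (0=null, 1=sha256, 2=sha384, 3=sha512) and the base64-of-key-plus-CRC32 layout are assumptions inferred from the DHHC-1:NN:...: secrets that appear later in this trace, not the helper's verbatim source.

# Sketch of the gen_dhchap_key steps traced above: len/2 random bytes as hex,
# wrapped into a DHHC-1 secret and stored mode 0600 in a temp file.
gen_dhchap_key_sketch() {
    local digest=$1 len=$2      # e.g. 1 and 32 for the "sha256 32" case above
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t spdk.key-XXXXXX)
    python3 - "$key" "$digest" > "$file" <<'PYEOF'
import base64, struct, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
# base64 of the ASCII hex key followed by its little-endian CRC32 (assumed
# layout, matching the DHHC-1:NN:...: secrets seen later in this log).
b64 = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
print(f"DHHC-1:{digest:02}:{b64}:")
PYEOF
    chmod 0600 "$file"
    echo "$file"
}

# Roughly what keys[1]=$(gen_dhchap_key sha256 32) does in auth.sh:
key1=$(gen_dhchap_key_sketch 1 32)

The file path this returns is what the later keyring_file_add_key RPCs load on both /var/tmp/spdk.sock (target) and /var/tmp/host.sock (host), and the same DHHC-1 string is what nvme connect passes via --dhchap-secret / --dhchap-ctrl-secret further down in this log.
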
00:19:06.451 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:06.452 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.710 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:06.710 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:06.710 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2799560 /var/tmp/host.sock 00:19:06.710 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 2799560 ']' 00:19:06.710 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:19:06.710 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:06.710 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:06.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:06.710 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:06.710 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.968 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:06.968 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:06.968 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:06.968 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.968 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.968 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.968 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:06.968 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.buT 00:19:06.968 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.968 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.968 04:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.968 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.buT 00:19:06.968 04:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.buT 00:19:07.226 04:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.xbL ]] 00:19:07.226 04:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xbL 00:19:07.226 04:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.226 04:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.226 04:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.226 04:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xbL 00:19:07.226 04:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xbL 00:19:07.485 04:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:07.485 04:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.vfl 00:19:07.485 04:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.485 04:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.485 04:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.485 04:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.vfl 00:19:07.485 04:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.vfl 00:19:07.743 04:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.m5y ]] 00:19:07.743 04:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.m5y 00:19:07.743 04:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.743 04:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.743 04:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.743 04:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.m5y 00:19:07.743 04:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.m5y 00:19:08.001 04:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:08.001 04:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.zyq 00:19:08.001 04:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.001 04:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.001 04:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.001 04:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.zyq 00:19:08.001 04:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.zyq 00:19:08.259 04:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Kyw ]] 00:19:08.259 04:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Kyw 00:19:08.259 04:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.259 04:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.259 04:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.259 04:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Kyw 00:19:08.259 04:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.Kyw 00:19:08.517 04:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:08.517 04:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.MU8 00:19:08.517 04:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.517 04:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.517 04:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.517 04:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.MU8 00:19:08.517 04:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.MU8 00:19:08.774 04:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:08.774 04:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:08.774 04:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:08.774 04:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.774 04:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:08.774 04:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:09.032 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:09.032 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.032 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:09.032 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:09.032 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:09.032 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.032 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.032 04:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.032 04:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.032 04:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.032 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.032 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.290 00:19:09.290 04:35:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.290 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.290 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.547 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.547 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.547 04:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.547 04:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.547 04:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.547 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.547 { 00:19:09.547 "cntlid": 1, 00:19:09.547 "qid": 0, 00:19:09.547 "state": "enabled", 00:19:09.547 "listen_address": { 00:19:09.547 "trtype": "TCP", 00:19:09.547 "adrfam": "IPv4", 00:19:09.547 "traddr": "10.0.0.2", 00:19:09.547 "trsvcid": "4420" 00:19:09.547 }, 00:19:09.547 "peer_address": { 00:19:09.547 "trtype": "TCP", 00:19:09.547 "adrfam": "IPv4", 00:19:09.547 "traddr": "10.0.0.1", 00:19:09.547 "trsvcid": "51948" 00:19:09.547 }, 00:19:09.547 "auth": { 00:19:09.547 "state": "completed", 00:19:09.547 "digest": "sha256", 00:19:09.547 "dhgroup": "null" 00:19:09.547 } 00:19:09.547 } 00:19:09.547 ]' 00:19:09.547 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.547 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.547 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.547 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:09.547 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.547 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.547 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.547 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.804 04:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:M2NhMTlhOThiZWY5ZGZmZGRmODBhZmI2NDdjNGUwYjhiZmZhYmUyZWEyY2M1NWRjls8hcg==: --dhchap-ctrl-secret DHHC-1:03:ODdmZThmZGMzOGY0NDJiYTdkNjJmYjBlN2JmNGZhZDYwZWE3YmE0ZjNkYWZlMThlNTJjMjk4NWEyZjExNmU1NTSSwhg=: 00:19:10.738 04:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.739 04:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.739 04:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.739 04:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:10.739 04:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.739 04:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.739 04:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:10.739 04:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:10.995 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:10.995 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.995 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:10.995 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:10.995 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:10.995 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.995 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.995 04:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.995 04:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.995 04:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.995 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.995 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.253 00:19:11.512 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.512 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.512 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.512 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.512 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.512 04:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.512 04:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.769 04:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.769 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.769 { 00:19:11.769 "cntlid": 3, 00:19:11.769 "qid": 0, 00:19:11.769 "state": "enabled", 00:19:11.769 "listen_address": { 00:19:11.769 
"trtype": "TCP", 00:19:11.769 "adrfam": "IPv4", 00:19:11.769 "traddr": "10.0.0.2", 00:19:11.769 "trsvcid": "4420" 00:19:11.769 }, 00:19:11.769 "peer_address": { 00:19:11.769 "trtype": "TCP", 00:19:11.769 "adrfam": "IPv4", 00:19:11.769 "traddr": "10.0.0.1", 00:19:11.770 "trsvcid": "51982" 00:19:11.770 }, 00:19:11.770 "auth": { 00:19:11.770 "state": "completed", 00:19:11.770 "digest": "sha256", 00:19:11.770 "dhgroup": "null" 00:19:11.770 } 00:19:11.770 } 00:19:11.770 ]' 00:19:11.770 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.770 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.770 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.770 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:11.770 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.770 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.770 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.770 04:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.027 04:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDM3MThiYmRjY2YzODllNzViY2I4NzMzMjRlZThkMTYBysTu: --dhchap-ctrl-secret DHHC-1:02:YjUxYWEzNDAyMzgxY2Y2NTI2YzNjNWJlZDY3NTUyYzg1NTI2ZWI0YzRlNmZhN2Qy7dMWbQ==: 00:19:12.963 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.963 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.963 04:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.963 04:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.963 04:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.963 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.963 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:12.963 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:13.221 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:13.221 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.221 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:13.221 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:13.221 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:13.221 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.221 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.221 04:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.221 04:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.221 04:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.221 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.221 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.478 00:19:13.478 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.478 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.478 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.734 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.734 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.734 04:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.734 04:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.734 04:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.734 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.734 { 00:19:13.734 "cntlid": 5, 00:19:13.734 "qid": 0, 00:19:13.734 "state": "enabled", 00:19:13.734 "listen_address": { 00:19:13.734 "trtype": "TCP", 00:19:13.734 "adrfam": "IPv4", 00:19:13.734 "traddr": "10.0.0.2", 00:19:13.734 "trsvcid": "4420" 00:19:13.734 }, 00:19:13.734 "peer_address": { 00:19:13.734 "trtype": "TCP", 00:19:13.734 "adrfam": "IPv4", 00:19:13.734 "traddr": "10.0.0.1", 00:19:13.734 "trsvcid": "52008" 00:19:13.734 }, 00:19:13.734 "auth": { 00:19:13.734 "state": "completed", 00:19:13.734 "digest": "sha256", 00:19:13.734 "dhgroup": "null" 00:19:13.734 } 00:19:13.734 } 00:19:13.734 ]' 00:19:13.735 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.735 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.735 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.992 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:13.992 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.992 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.992 04:35:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.992 04:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.250 04:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWVlZTRhOTIzNGVmYWI3MjBkMjYwZDg3OThjOWM0MjM4MmZiYjcwZDkzYmVhMzBmFlxh5w==: --dhchap-ctrl-secret DHHC-1:01:Mjc5YzUwZDNiMDY3YThjOWU5YTc5NWE3ZWYxN2EyMDnH6hJa: 00:19:15.185 04:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.185 04:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:15.185 04:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.185 04:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.185 04:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.185 04:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.185 04:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:15.185 04:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:15.442 04:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:15.442 04:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.442 04:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:15.442 04:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:15.443 04:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:15.443 04:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.443 04:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:15.443 04:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.443 04:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.443 04:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.443 04:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.443 04:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.700 00:19:15.700 04:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.700 04:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.700 04:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.958 04:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.958 04:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.958 04:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.958 04:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.958 04:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.958 04:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.958 { 00:19:15.958 "cntlid": 7, 00:19:15.958 "qid": 0, 00:19:15.958 "state": "enabled", 00:19:15.958 "listen_address": { 00:19:15.958 "trtype": "TCP", 00:19:15.958 "adrfam": "IPv4", 00:19:15.958 "traddr": "10.0.0.2", 00:19:15.958 "trsvcid": "4420" 00:19:15.958 }, 00:19:15.958 "peer_address": { 00:19:15.958 "trtype": "TCP", 00:19:15.958 "adrfam": "IPv4", 00:19:15.958 "traddr": "10.0.0.1", 00:19:15.958 "trsvcid": "52030" 00:19:15.958 }, 00:19:15.958 "auth": { 00:19:15.958 "state": "completed", 00:19:15.958 "digest": "sha256", 00:19:15.958 "dhgroup": "null" 00:19:15.958 } 00:19:15.958 } 00:19:15.958 ]' 00:19:15.958 04:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.958 04:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.958 04:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.958 04:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:15.958 04:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.216 04:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.216 04:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.216 04:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.475 04:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjhkNmI3ZjM5MDBmM2NmN2FjNzVkYzE3YmI5N2ZiNDFlMjkxNjFkNjkzODZkYjQ5ZWIwMTkyNjYwOTlhYzhlY0cATyg=: 00:19:17.411 04:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.411 04:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.412 04:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.412 
04:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.412 04:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.412 04:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.412 04:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.412 04:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:17.412 04:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:17.670 04:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:17.670 04:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.670 04:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:17.670 04:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:17.670 04:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:17.670 04:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.670 04:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.670 04:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.670 04:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.670 04:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.670 04:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.670 04:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.928 00:19:17.928 04:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.928 04:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.928 04:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.186 04:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.186 04:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.186 04:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.186 04:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.186 04:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.186 04:35:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.186 { 00:19:18.186 "cntlid": 9, 00:19:18.186 "qid": 0, 00:19:18.186 "state": "enabled", 00:19:18.186 "listen_address": { 00:19:18.186 "trtype": "TCP", 00:19:18.186 "adrfam": "IPv4", 00:19:18.186 "traddr": "10.0.0.2", 00:19:18.186 "trsvcid": "4420" 00:19:18.186 }, 00:19:18.186 "peer_address": { 00:19:18.186 "trtype": "TCP", 00:19:18.186 "adrfam": "IPv4", 00:19:18.186 "traddr": "10.0.0.1", 00:19:18.186 "trsvcid": "35074" 00:19:18.186 }, 00:19:18.186 "auth": { 00:19:18.186 "state": "completed", 00:19:18.186 "digest": "sha256", 00:19:18.186 "dhgroup": "ffdhe2048" 00:19:18.186 } 00:19:18.186 } 00:19:18.186 ]' 00:19:18.186 04:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.444 04:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.444 04:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.444 04:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:18.444 04:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.444 04:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.444 04:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.444 04:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.707 04:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:M2NhMTlhOThiZWY5ZGZmZGRmODBhZmI2NDdjNGUwYjhiZmZhYmUyZWEyY2M1NWRjls8hcg==: --dhchap-ctrl-secret DHHC-1:03:ODdmZThmZGMzOGY0NDJiYTdkNjJmYjBlN2JmNGZhZDYwZWE3YmE0ZjNkYWZlMThlNTJjMjk4NWEyZjExNmU1NTSSwhg=: 00:19:19.706 04:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.706 04:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.706 04:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.706 04:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.706 04:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.706 04:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.706 04:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:19.706 04:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:19.963 04:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:19.963 04:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.963 04:35:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:19.963 04:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:19.963 04:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:19.963 04:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.963 04:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.963 04:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.963 04:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.963 04:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.963 04:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.963 04:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.222 00:19:20.481 04:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.481 04:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.481 04:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.481 04:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.481 04:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.481 04:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.481 04:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.740 04:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.740 04:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.740 { 00:19:20.740 "cntlid": 11, 00:19:20.740 "qid": 0, 00:19:20.740 "state": "enabled", 00:19:20.740 "listen_address": { 00:19:20.740 "trtype": "TCP", 00:19:20.740 "adrfam": "IPv4", 00:19:20.740 "traddr": "10.0.0.2", 00:19:20.740 "trsvcid": "4420" 00:19:20.740 }, 00:19:20.740 "peer_address": { 00:19:20.740 "trtype": "TCP", 00:19:20.740 "adrfam": "IPv4", 00:19:20.740 "traddr": "10.0.0.1", 00:19:20.740 "trsvcid": "35106" 00:19:20.740 }, 00:19:20.740 "auth": { 00:19:20.740 "state": "completed", 00:19:20.740 "digest": "sha256", 00:19:20.740 "dhgroup": "ffdhe2048" 00:19:20.740 } 00:19:20.740 } 00:19:20.740 ]' 00:19:20.740 04:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.740 04:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.740 04:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.740 04:35:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:20.740 04:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.740 04:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.740 04:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.740 04:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.998 04:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDM3MThiYmRjY2YzODllNzViY2I4NzMzMjRlZThkMTYBysTu: --dhchap-ctrl-secret DHHC-1:02:YjUxYWEzNDAyMzgxY2Y2NTI2YzNjNWJlZDY3NTUyYzg1NTI2ZWI0YzRlNmZhN2Qy7dMWbQ==: 00:19:21.933 04:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.933 04:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.933 04:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.933 04:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.933 04:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.933 04:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.933 04:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:21.933 04:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:22.192 04:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:22.192 04:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.192 04:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:22.192 04:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:22.192 04:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:22.192 04:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.192 04:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.192 04:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.192 04:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.451 04:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.451 04:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.451 04:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.709 00:19:22.709 04:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.709 04:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.709 04:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.966 04:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.966 04:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.966 04:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.966 04:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.966 04:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.966 04:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.966 { 00:19:22.966 "cntlid": 13, 00:19:22.966 "qid": 0, 00:19:22.966 "state": "enabled", 00:19:22.966 "listen_address": { 00:19:22.966 "trtype": "TCP", 00:19:22.966 "adrfam": "IPv4", 00:19:22.966 "traddr": "10.0.0.2", 00:19:22.966 "trsvcid": "4420" 00:19:22.966 }, 00:19:22.966 "peer_address": { 00:19:22.966 "trtype": "TCP", 00:19:22.966 "adrfam": "IPv4", 00:19:22.966 "traddr": "10.0.0.1", 00:19:22.966 "trsvcid": "35134" 00:19:22.966 }, 00:19:22.966 "auth": { 00:19:22.966 "state": "completed", 00:19:22.966 "digest": "sha256", 00:19:22.966 "dhgroup": "ffdhe2048" 00:19:22.966 } 00:19:22.966 } 00:19:22.966 ]' 00:19:22.966 04:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.966 04:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.966 04:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.966 04:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:22.966 04:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.966 04:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.966 04:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.966 04:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.225 04:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWVlZTRhOTIzNGVmYWI3MjBkMjYwZDg3OThjOWM0MjM4MmZiYjcwZDkzYmVhMzBmFlxh5w==: --dhchap-ctrl-secret DHHC-1:01:Mjc5YzUwZDNiMDY3YThjOWU5YTc5NWE3ZWYxN2EyMDnH6hJa: 00:19:24.600 04:35:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.600 04:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.600 04:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.600 04:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.600 04:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.600 04:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.600 04:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:24.600 04:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:24.600 04:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:24.600 04:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.600 04:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.600 04:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:24.600 04:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:24.600 04:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.600 04:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:24.600 04:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.600 04:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.600 04:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.600 04:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.600 04:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.859 00:19:25.117 04:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.117 04:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.117 04:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.117 04:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.117 04:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
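Note on the loop traced above: target/auth.sh runs one verification cycle per DH-CHAP key for each digest/dhgroup combination. Below is a condensed sketch of that cycle; the socket path, addresses, NQNs and key names are copied from this trace, while the short scripts/rpc.py path and the inline comments are added only for readability (the harness helpers rpc_cmd and hostrpc wrap these same RPC calls).

  # target side: authorize the host on the subsystem with key0 (controller key ckey0 is optional)
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # host side (SPDK initiator on /var/tmp/host.sock): pin the digest and DH group, then attach
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # verify the controller came up and the admin qpair completed DH-CHAP with the expected parameters
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'         # expect nvme0
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expect completed
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # expect sha256
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # expect ffdhe2048

  # tear down before the next key/dhgroup combination
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55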
00:19:25.117 04:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.117 04:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.117 04:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.117 04:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.117 { 00:19:25.117 "cntlid": 15, 00:19:25.117 "qid": 0, 00:19:25.117 "state": "enabled", 00:19:25.117 "listen_address": { 00:19:25.117 "trtype": "TCP", 00:19:25.117 "adrfam": "IPv4", 00:19:25.117 "traddr": "10.0.0.2", 00:19:25.117 "trsvcid": "4420" 00:19:25.117 }, 00:19:25.118 "peer_address": { 00:19:25.118 "trtype": "TCP", 00:19:25.118 "adrfam": "IPv4", 00:19:25.118 "traddr": "10.0.0.1", 00:19:25.118 "trsvcid": "35146" 00:19:25.118 }, 00:19:25.118 "auth": { 00:19:25.118 "state": "completed", 00:19:25.118 "digest": "sha256", 00:19:25.118 "dhgroup": "ffdhe2048" 00:19:25.118 } 00:19:25.118 } 00:19:25.118 ]' 00:19:25.118 04:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.375 04:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.375 04:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.375 04:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:25.375 04:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.375 04:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.375 04:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.375 04:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.633 04:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjhkNmI3ZjM5MDBmM2NmN2FjNzVkYzE3YmI5N2ZiNDFlMjkxNjFkNjkzODZkYjQ5ZWIwMTkyNjYwOTlhYzhlY0cATyg=: 00:19:26.578 04:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.579 04:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.579 04:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.579 04:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.579 04:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.579 04:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.579 04:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.579 04:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:26.579 04:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:26.837 04:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:26.837 04:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.837 04:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.837 04:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:26.837 04:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:26.837 04:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.837 04:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.837 04:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.837 04:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.837 04:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.837 04:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.837 04:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.404 00:19:27.404 04:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.404 04:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.404 04:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.404 04:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.404 04:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.404 04:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.404 04:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.404 04:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.404 04:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.404 { 00:19:27.404 "cntlid": 17, 00:19:27.404 "qid": 0, 00:19:27.404 "state": "enabled", 00:19:27.404 "listen_address": { 00:19:27.404 "trtype": "TCP", 00:19:27.404 "adrfam": "IPv4", 00:19:27.404 "traddr": "10.0.0.2", 00:19:27.404 "trsvcid": "4420" 00:19:27.404 }, 00:19:27.404 "peer_address": { 00:19:27.404 "trtype": "TCP", 00:19:27.405 "adrfam": "IPv4", 00:19:27.405 "traddr": "10.0.0.1", 00:19:27.405 "trsvcid": "35180" 00:19:27.405 }, 00:19:27.405 "auth": { 00:19:27.405 "state": "completed", 00:19:27.405 "digest": "sha256", 00:19:27.405 "dhgroup": "ffdhe3072" 00:19:27.405 } 00:19:27.405 } 00:19:27.405 ]' 00:19:27.405 04:35:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.662 04:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.662 04:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.662 04:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:27.662 04:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.662 04:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.662 04:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.662 04:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.920 04:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:M2NhMTlhOThiZWY5ZGZmZGRmODBhZmI2NDdjNGUwYjhiZmZhYmUyZWEyY2M1NWRjls8hcg==: --dhchap-ctrl-secret DHHC-1:03:ODdmZThmZGMzOGY0NDJiYTdkNjJmYjBlN2JmNGZhZDYwZWE3YmE0ZjNkYWZlMThlNTJjMjk4NWEyZjExNmU1NTSSwhg=: 00:19:28.858 04:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.858 04:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.858 04:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.858 04:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.858 04:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.858 04:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.858 04:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:28.858 04:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:29.116 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:29.116 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.116 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:29.116 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:29.116 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:29.116 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.116 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.116 04:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.116 
04:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.116 04:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.116 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.116 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.375 00:19:29.375 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.375 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.375 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.633 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.633 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.633 04:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.633 04:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.633 04:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.633 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.633 { 00:19:29.633 "cntlid": 19, 00:19:29.633 "qid": 0, 00:19:29.633 "state": "enabled", 00:19:29.633 "listen_address": { 00:19:29.633 "trtype": "TCP", 00:19:29.633 "adrfam": "IPv4", 00:19:29.633 "traddr": "10.0.0.2", 00:19:29.633 "trsvcid": "4420" 00:19:29.633 }, 00:19:29.633 "peer_address": { 00:19:29.633 "trtype": "TCP", 00:19:29.633 "adrfam": "IPv4", 00:19:29.633 "traddr": "10.0.0.1", 00:19:29.633 "trsvcid": "46804" 00:19:29.633 }, 00:19:29.633 "auth": { 00:19:29.633 "state": "completed", 00:19:29.633 "digest": "sha256", 00:19:29.633 "dhgroup": "ffdhe3072" 00:19:29.633 } 00:19:29.633 } 00:19:29.633 ]' 00:19:29.633 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.633 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.633 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.891 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:29.891 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.891 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.891 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.891 04:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.151 04:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDM3MThiYmRjY2YzODllNzViY2I4NzMzMjRlZThkMTYBysTu: --dhchap-ctrl-secret DHHC-1:02:YjUxYWEzNDAyMzgxY2Y2NTI2YzNjNWJlZDY3NTUyYzg1NTI2ZWI0YzRlNmZhN2Qy7dMWbQ==: 00:19:31.097 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.097 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.097 04:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.097 04:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.097 04:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.097 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.097 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:31.097 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:31.355 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:31.355 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.355 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:31.355 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:31.355 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:31.355 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.355 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.355 04:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.355 04:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.355 04:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.355 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.355 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.614 00:19:31.614 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.614 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.614 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.872 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.872 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.872 04:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.872 04:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.872 04:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.872 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.872 { 00:19:31.872 "cntlid": 21, 00:19:31.872 "qid": 0, 00:19:31.872 "state": "enabled", 00:19:31.872 "listen_address": { 00:19:31.872 "trtype": "TCP", 00:19:31.872 "adrfam": "IPv4", 00:19:31.872 "traddr": "10.0.0.2", 00:19:31.872 "trsvcid": "4420" 00:19:31.872 }, 00:19:31.872 "peer_address": { 00:19:31.872 "trtype": "TCP", 00:19:31.872 "adrfam": "IPv4", 00:19:31.872 "traddr": "10.0.0.1", 00:19:31.872 "trsvcid": "46838" 00:19:31.872 }, 00:19:31.872 "auth": { 00:19:31.872 "state": "completed", 00:19:31.872 "digest": "sha256", 00:19:31.872 "dhgroup": "ffdhe3072" 00:19:31.872 } 00:19:31.872 } 00:19:31.872 ]' 00:19:31.872 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.872 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.872 04:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.872 04:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:31.872 04:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.872 04:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.872 04:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.872 04:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.131 04:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWVlZTRhOTIzNGVmYWI3MjBkMjYwZDg3OThjOWM0MjM4MmZiYjcwZDkzYmVhMzBmFlxh5w==: --dhchap-ctrl-secret DHHC-1:01:Mjc5YzUwZDNiMDY3YThjOWU5YTc5NWE3ZWYxN2EyMDnH6hJa: 00:19:33.506 04:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.506 04:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.506 04:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.506 04:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.506 04:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.506 04:35:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.507 04:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:33.507 04:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:33.507 04:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:33.507 04:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.507 04:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:33.507 04:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:33.507 04:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:33.507 04:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.507 04:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:33.507 04:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.507 04:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.507 04:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.507 04:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.507 04:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.764 00:19:34.023 04:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.023 04:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.023 04:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.304 04:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.304 04:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.304 04:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.304 04:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.304 04:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.304 04:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.304 { 00:19:34.304 "cntlid": 23, 00:19:34.304 "qid": 0, 00:19:34.304 "state": "enabled", 00:19:34.304 "listen_address": { 00:19:34.304 "trtype": "TCP", 00:19:34.304 "adrfam": "IPv4", 00:19:34.304 "traddr": "10.0.0.2", 00:19:34.304 "trsvcid": "4420" 00:19:34.304 }, 00:19:34.304 "peer_address": { 00:19:34.304 "trtype": "TCP", 00:19:34.304 
"adrfam": "IPv4", 00:19:34.304 "traddr": "10.0.0.1", 00:19:34.304 "trsvcid": "46872" 00:19:34.304 }, 00:19:34.304 "auth": { 00:19:34.304 "state": "completed", 00:19:34.304 "digest": "sha256", 00:19:34.304 "dhgroup": "ffdhe3072" 00:19:34.304 } 00:19:34.304 } 00:19:34.304 ]' 00:19:34.304 04:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.304 04:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.304 04:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.304 04:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:34.304 04:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.304 04:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.304 04:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.304 04:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.561 04:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjhkNmI3ZjM5MDBmM2NmN2FjNzVkYzE3YmI5N2ZiNDFlMjkxNjFkNjkzODZkYjQ5ZWIwMTkyNjYwOTlhYzhlY0cATyg=: 00:19:35.499 04:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.499 04:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.499 04:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.499 04:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.499 04:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.499 04:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.499 04:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.499 04:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:35.499 04:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:35.757 04:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:35.757 04:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.757 04:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.757 04:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:35.757 04:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:35.757 04:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.757 04:35:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.757 04:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.757 04:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.757 04:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.757 04:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.757 04:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.325 00:19:36.325 04:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.325 04:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.325 04:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.584 04:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.584 04:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.584 04:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.584 04:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.584 04:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.584 04:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.584 { 00:19:36.584 "cntlid": 25, 00:19:36.584 "qid": 0, 00:19:36.584 "state": "enabled", 00:19:36.584 "listen_address": { 00:19:36.584 "trtype": "TCP", 00:19:36.584 "adrfam": "IPv4", 00:19:36.584 "traddr": "10.0.0.2", 00:19:36.584 "trsvcid": "4420" 00:19:36.584 }, 00:19:36.584 "peer_address": { 00:19:36.584 "trtype": "TCP", 00:19:36.584 "adrfam": "IPv4", 00:19:36.584 "traddr": "10.0.0.1", 00:19:36.584 "trsvcid": "46904" 00:19:36.584 }, 00:19:36.584 "auth": { 00:19:36.584 "state": "completed", 00:19:36.584 "digest": "sha256", 00:19:36.584 "dhgroup": "ffdhe4096" 00:19:36.584 } 00:19:36.584 } 00:19:36.584 ]' 00:19:36.584 04:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.584 04:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.584 04:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.584 04:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:36.584 04:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.584 04:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.584 04:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.584 
04:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.843 04:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:M2NhMTlhOThiZWY5ZGZmZGRmODBhZmI2NDdjNGUwYjhiZmZhYmUyZWEyY2M1NWRjls8hcg==: --dhchap-ctrl-secret DHHC-1:03:ODdmZThmZGMzOGY0NDJiYTdkNjJmYjBlN2JmNGZhZDYwZWE3YmE0ZjNkYWZlMThlNTJjMjk4NWEyZjExNmU1NTSSwhg=: 00:19:38.221 04:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.221 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.221 04:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.221 04:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.221 04:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.221 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.221 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:38.221 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:38.221 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:38.221 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.221 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:38.221 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:38.221 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:38.221 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.221 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.221 04:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.221 04:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.221 04:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.221 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.221 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.479 00:19:38.479 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.479 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.479 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.737 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.737 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.737 04:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.737 04:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.737 04:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.737 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.737 { 00:19:38.737 "cntlid": 27, 00:19:38.737 "qid": 0, 00:19:38.737 "state": "enabled", 00:19:38.737 "listen_address": { 00:19:38.737 "trtype": "TCP", 00:19:38.737 "adrfam": "IPv4", 00:19:38.737 "traddr": "10.0.0.2", 00:19:38.737 "trsvcid": "4420" 00:19:38.737 }, 00:19:38.737 "peer_address": { 00:19:38.737 "trtype": "TCP", 00:19:38.737 "adrfam": "IPv4", 00:19:38.737 "traddr": "10.0.0.1", 00:19:38.737 "trsvcid": "42732" 00:19:38.737 }, 00:19:38.737 "auth": { 00:19:38.737 "state": "completed", 00:19:38.737 "digest": "sha256", 00:19:38.737 "dhgroup": "ffdhe4096" 00:19:38.737 } 00:19:38.737 } 00:19:38.737 ]' 00:19:38.737 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.995 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.995 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.995 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:38.995 04:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.995 04:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.995 04:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.995 04:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.254 04:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDM3MThiYmRjY2YzODllNzViY2I4NzMzMjRlZThkMTYBysTu: --dhchap-ctrl-secret DHHC-1:02:YjUxYWEzNDAyMzgxY2Y2NTI2YzNjNWJlZDY3NTUyYzg1NTI2ZWI0YzRlNmZhN2Qy7dMWbQ==: 00:19:40.188 04:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.188 04:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
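Besides the SPDK bdev_nvme initiator, each cycle also authenticates with the Linux in-kernel host through nvme-cli, passing the DH-HMAC-CHAP secrets on the command line in DHHC-1 wire format. A condensed restatement of the key1 leg from this trace (same address, NQNs, host ID and test secrets as above; line breaks added only for readability):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret DHHC-1:01:ZDM3MThiYmRjY2YzODllNzViY2I4NzMzMjRlZThkMTYBysTu: \
      --dhchap-ctrl-secret DHHC-1:02:YjUxYWEzNDAyMzgxY2Y2NTI2YzNjNWJlZDY3NTUyYzg1NTI2ZWI0YzRlNmZhN2Qy7dMWbQ==:
  # on success the target-side qpair listing shows "auth": { "state": "completed", ... }
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)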
00:19:40.188 04:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.188 04:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.188 04:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.188 04:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.188 04:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:40.188 04:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:40.446 04:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:40.446 04:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.446 04:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:40.446 04:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:40.446 04:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:40.446 04:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.446 04:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.446 04:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.446 04:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.446 04:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.446 04:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.446 04:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.704 00:19:40.704 04:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.704 04:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.704 04:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.962 04:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.962 04:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.962 04:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.962 04:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.219 04:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.219 
04:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.219 { 00:19:41.219 "cntlid": 29, 00:19:41.219 "qid": 0, 00:19:41.219 "state": "enabled", 00:19:41.219 "listen_address": { 00:19:41.219 "trtype": "TCP", 00:19:41.219 "adrfam": "IPv4", 00:19:41.219 "traddr": "10.0.0.2", 00:19:41.219 "trsvcid": "4420" 00:19:41.219 }, 00:19:41.219 "peer_address": { 00:19:41.219 "trtype": "TCP", 00:19:41.219 "adrfam": "IPv4", 00:19:41.219 "traddr": "10.0.0.1", 00:19:41.219 "trsvcid": "42768" 00:19:41.219 }, 00:19:41.219 "auth": { 00:19:41.219 "state": "completed", 00:19:41.219 "digest": "sha256", 00:19:41.219 "dhgroup": "ffdhe4096" 00:19:41.219 } 00:19:41.219 } 00:19:41.219 ]' 00:19:41.219 04:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.219 04:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.219 04:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.219 04:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:41.219 04:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.219 04:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.219 04:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.219 04:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.479 04:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWVlZTRhOTIzNGVmYWI3MjBkMjYwZDg3OThjOWM0MjM4MmZiYjcwZDkzYmVhMzBmFlxh5w==: --dhchap-ctrl-secret DHHC-1:01:Mjc5YzUwZDNiMDY3YThjOWU5YTc5NWE3ZWYxN2EyMDnH6hJa: 00:19:42.416 04:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.416 04:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.416 04:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.416 04:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.416 04:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.416 04:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.416 04:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:42.416 04:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:42.674 04:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:42.674 04:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.674 04:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:19:42.674 04:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:42.674 04:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:42.674 04:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.674 04:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:42.674 04:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.674 04:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.674 04:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.674 04:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.674 04:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.251 00:19:43.252 04:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.252 04:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.252 04:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.252 04:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.252 04:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.252 04:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.252 04:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.252 04:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.252 04:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.252 { 00:19:43.252 "cntlid": 31, 00:19:43.252 "qid": 0, 00:19:43.252 "state": "enabled", 00:19:43.252 "listen_address": { 00:19:43.252 "trtype": "TCP", 00:19:43.252 "adrfam": "IPv4", 00:19:43.252 "traddr": "10.0.0.2", 00:19:43.252 "trsvcid": "4420" 00:19:43.252 }, 00:19:43.252 "peer_address": { 00:19:43.252 "trtype": "TCP", 00:19:43.252 "adrfam": "IPv4", 00:19:43.252 "traddr": "10.0.0.1", 00:19:43.252 "trsvcid": "42802" 00:19:43.252 }, 00:19:43.252 "auth": { 00:19:43.252 "state": "completed", 00:19:43.252 "digest": "sha256", 00:19:43.252 "dhgroup": "ffdhe4096" 00:19:43.252 } 00:19:43.252 } 00:19:43.252 ]' 00:19:43.252 04:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.510 04:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.510 04:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.510 04:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:43.510 04:36:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.510 04:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.510 04:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.510 04:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.767 04:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjhkNmI3ZjM5MDBmM2NmN2FjNzVkYzE3YmI5N2ZiNDFlMjkxNjFkNjkzODZkYjQ5ZWIwMTkyNjYwOTlhYzhlY0cATyg=: 00:19:44.702 04:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.702 04:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.702 04:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.702 04:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.702 04:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.702 04:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.702 04:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.702 04:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:44.702 04:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:44.960 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:44.960 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.960 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:44.960 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:44.960 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:44.960 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.960 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.960 04:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.960 04:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.960 04:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.960 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:19:44.960 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.527 00:19:45.528 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.528 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.528 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.786 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.786 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.786 04:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.786 04:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.786 04:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.786 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.786 { 00:19:45.786 "cntlid": 33, 00:19:45.786 "qid": 0, 00:19:45.786 "state": "enabled", 00:19:45.786 "listen_address": { 00:19:45.786 "trtype": "TCP", 00:19:45.786 "adrfam": "IPv4", 00:19:45.786 "traddr": "10.0.0.2", 00:19:45.786 "trsvcid": "4420" 00:19:45.786 }, 00:19:45.786 "peer_address": { 00:19:45.786 "trtype": "TCP", 00:19:45.786 "adrfam": "IPv4", 00:19:45.786 "traddr": "10.0.0.1", 00:19:45.786 "trsvcid": "42836" 00:19:45.786 }, 00:19:45.786 "auth": { 00:19:45.786 "state": "completed", 00:19:45.786 "digest": "sha256", 00:19:45.786 "dhgroup": "ffdhe6144" 00:19:45.786 } 00:19:45.786 } 00:19:45.786 ]' 00:19:45.786 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.787 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.787 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.787 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:45.787 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.046 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.046 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.046 04:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.046 04:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:M2NhMTlhOThiZWY5ZGZmZGRmODBhZmI2NDdjNGUwYjhiZmZhYmUyZWEyY2M1NWRjls8hcg==: --dhchap-ctrl-secret DHHC-1:03:ODdmZThmZGMzOGY0NDJiYTdkNjJmYjBlN2JmNGZhZDYwZWE3YmE0ZjNkYWZlMThlNTJjMjk4NWEyZjExNmU1NTSSwhg=: 00:19:47.423 04:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:47.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.423 04:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.423 04:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.423 04:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.423 04:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.423 04:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.423 04:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:47.424 04:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:47.424 04:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:47.424 04:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.424 04:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:47.424 04:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:47.424 04:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:47.424 04:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.424 04:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.424 04:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.424 04:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.424 04:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.424 04:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.424 04:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.015 00:19:48.015 04:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.015 04:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.015 04:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.286 04:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.286 04:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
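[editor's note] For readability, the trace above boils down to the following host-side RPC sequence for one connect_authenticate iteration (here sha256 / ffdhe6144 / key1). This is a condensed sketch reconstructed from the commands logged above, not the test script itself; SPDK_ROOT, HOSTNQN and SUBNQN are placeholder variable names introduced only for this sketch.

# Condensed sketch of one connect_authenticate iteration as logged above
# (digest=sha256, dhgroup=ffdhe6144, keyid=1). Variable names are placeholders.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout path from the trace
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
SUBNQN=nqn.2024-03.io.spdk:cnode0

# 1. Restrict the SPDK host app (RPC socket /var/tmp/host.sock) to the digest/dhgroup under test.
$SPDK_ROOT/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

# 2. Allow the host NQN on the target subsystem with the matching DH-HMAC-CHAP keys
#    (this rpc_cmd goes to the target application's RPC socket).
$SPDK_ROOT/scripts/rpc.py nvmf_subsystem_add_host $SUBNQN $HOSTNQN \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3. Attach a controller from the SPDK host side; this is where authentication actually runs.
$SPDK_ROOT/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 4. Sanity check: the attached controller should be reported back as nvme0.
$SPDK_ROOT/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'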
00:19:48.286 04:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.286 04:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.286 04:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.286 04:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.286 { 00:19:48.286 "cntlid": 35, 00:19:48.286 "qid": 0, 00:19:48.286 "state": "enabled", 00:19:48.286 "listen_address": { 00:19:48.286 "trtype": "TCP", 00:19:48.286 "adrfam": "IPv4", 00:19:48.286 "traddr": "10.0.0.2", 00:19:48.286 "trsvcid": "4420" 00:19:48.286 }, 00:19:48.286 "peer_address": { 00:19:48.286 "trtype": "TCP", 00:19:48.286 "adrfam": "IPv4", 00:19:48.286 "traddr": "10.0.0.1", 00:19:48.286 "trsvcid": "42272" 00:19:48.286 }, 00:19:48.286 "auth": { 00:19:48.286 "state": "completed", 00:19:48.286 "digest": "sha256", 00:19:48.286 "dhgroup": "ffdhe6144" 00:19:48.286 } 00:19:48.286 } 00:19:48.286 ]' 00:19:48.286 04:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.286 04:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.286 04:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.286 04:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:48.286 04:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.286 04:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.286 04:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.286 04:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.544 04:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDM3MThiYmRjY2YzODllNzViY2I4NzMzMjRlZThkMTYBysTu: --dhchap-ctrl-secret DHHC-1:02:YjUxYWEzNDAyMzgxY2Y2NTI2YzNjNWJlZDY3NTUyYzg1NTI2ZWI0YzRlNmZhN2Qy7dMWbQ==: 00:19:49.479 04:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.479 04:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.479 04:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.479 04:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.479 04:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.479 04:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.479 04:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:49.479 04:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
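[editor's note] The qpair dump above is what the test asserts against. Below is a condensed sketch of that verification step, using the same jq filters that appear in the trace; SPDK_ROOT/SUBNQN are the placeholder names from the previous sketch, and the target RPC socket is assumed to be the default.

# Verify that the established admin qpair completed DH-HMAC-CHAP with the
# expected digest and DH group (expected values taken from the trace above).
qpairs=$($SPDK_ROOT/scripts/rpc.py nvmf_subsystem_get_qpairs $SUBNQN)

[[ $(echo "$qpairs" | jq -r '.[0].auth.digest')  == "sha256" ]]
[[ $(echo "$qpairs" | jq -r '.[0].auth.dhgroup') == "ffdhe6144" ]]
[[ $(echo "$qpairs" | jq -r '.[0].auth.state')   == "completed" ]]

# Tear the SPDK-host controller down again before the kernel-host (nvme-cli) pass.
$SPDK_ROOT/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0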
00:19:49.737 04:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:49.737 04:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.737 04:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:49.737 04:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:49.737 04:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:49.737 04:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.737 04:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.737 04:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.737 04:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.737 04:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.737 04:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.737 04:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.303 00:19:50.303 04:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.303 04:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.303 04:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.566 04:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.566 04:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.566 04:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.566 04:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.566 04:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.566 04:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.566 { 00:19:50.566 "cntlid": 37, 00:19:50.566 "qid": 0, 00:19:50.566 "state": "enabled", 00:19:50.566 "listen_address": { 00:19:50.566 "trtype": "TCP", 00:19:50.566 "adrfam": "IPv4", 00:19:50.566 "traddr": "10.0.0.2", 00:19:50.566 "trsvcid": "4420" 00:19:50.566 }, 00:19:50.566 "peer_address": { 00:19:50.566 "trtype": "TCP", 00:19:50.566 "adrfam": "IPv4", 00:19:50.566 "traddr": "10.0.0.1", 00:19:50.566 "trsvcid": "42290" 00:19:50.566 }, 00:19:50.566 "auth": { 00:19:50.566 "state": "completed", 00:19:50.566 "digest": "sha256", 00:19:50.566 "dhgroup": "ffdhe6144" 00:19:50.566 } 00:19:50.566 } 00:19:50.566 ]' 00:19:50.566 04:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:19:50.566 04:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.566 04:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.566 04:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:50.566 04:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.566 04:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.566 04:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.566 04:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.827 04:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWVlZTRhOTIzNGVmYWI3MjBkMjYwZDg3OThjOWM0MjM4MmZiYjcwZDkzYmVhMzBmFlxh5w==: --dhchap-ctrl-secret DHHC-1:01:Mjc5YzUwZDNiMDY3YThjOWU5YTc5NWE3ZWYxN2EyMDnH6hJa: 00:19:52.202 04:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.202 04:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.202 04:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.202 04:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.202 04:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.202 04:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.202 04:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:52.202 04:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:52.202 04:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:52.202 04:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.202 04:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:52.202 04:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:52.202 04:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:52.202 04:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.202 04:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:52.202 04:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.202 04:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.202 04:36:12 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.202 04:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.202 04:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.769 00:19:52.769 04:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.769 04:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.769 04:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.027 04:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.027 04:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.027 04:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.028 04:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.028 04:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.028 04:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.028 { 00:19:53.028 "cntlid": 39, 00:19:53.028 "qid": 0, 00:19:53.028 "state": "enabled", 00:19:53.028 "listen_address": { 00:19:53.028 "trtype": "TCP", 00:19:53.028 "adrfam": "IPv4", 00:19:53.028 "traddr": "10.0.0.2", 00:19:53.028 "trsvcid": "4420" 00:19:53.028 }, 00:19:53.028 "peer_address": { 00:19:53.028 "trtype": "TCP", 00:19:53.028 "adrfam": "IPv4", 00:19:53.028 "traddr": "10.0.0.1", 00:19:53.028 "trsvcid": "42314" 00:19:53.028 }, 00:19:53.028 "auth": { 00:19:53.028 "state": "completed", 00:19:53.028 "digest": "sha256", 00:19:53.028 "dhgroup": "ffdhe6144" 00:19:53.028 } 00:19:53.028 } 00:19:53.028 ]' 00:19:53.028 04:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.028 04:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.028 04:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.028 04:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:53.028 04:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.028 04:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.028 04:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.028 04:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.286 04:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:03:MjhkNmI3ZjM5MDBmM2NmN2FjNzVkYzE3YmI5N2ZiNDFlMjkxNjFkNjkzODZkYjQ5ZWIwMTkyNjYwOTlhYzhlY0cATyg=: 00:19:54.219 04:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.219 04:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.219 04:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.219 04:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.219 04:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.219 04:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.219 04:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.219 04:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:54.219 04:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:54.476 04:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:54.476 04:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.476 04:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:54.476 04:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:54.476 04:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:54.476 04:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.476 04:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.476 04:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.476 04:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.476 04:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.477 04:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.477 04:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.411 00:19:55.411 04:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.411 04:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.411 04:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.669 04:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.669 04:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.669 04:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.670 04:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.670 04:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.670 04:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.670 { 00:19:55.670 "cntlid": 41, 00:19:55.670 "qid": 0, 00:19:55.670 "state": "enabled", 00:19:55.670 "listen_address": { 00:19:55.670 "trtype": "TCP", 00:19:55.670 "adrfam": "IPv4", 00:19:55.670 "traddr": "10.0.0.2", 00:19:55.670 "trsvcid": "4420" 00:19:55.670 }, 00:19:55.670 "peer_address": { 00:19:55.670 "trtype": "TCP", 00:19:55.670 "adrfam": "IPv4", 00:19:55.670 "traddr": "10.0.0.1", 00:19:55.670 "trsvcid": "42338" 00:19:55.670 }, 00:19:55.670 "auth": { 00:19:55.670 "state": "completed", 00:19:55.670 "digest": "sha256", 00:19:55.670 "dhgroup": "ffdhe8192" 00:19:55.670 } 00:19:55.670 } 00:19:55.670 ]' 00:19:55.670 04:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.670 04:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.670 04:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.670 04:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:55.670 04:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.928 04:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.928 04:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.928 04:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.186 04:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:M2NhMTlhOThiZWY5ZGZmZGRmODBhZmI2NDdjNGUwYjhiZmZhYmUyZWEyY2M1NWRjls8hcg==: --dhchap-ctrl-secret DHHC-1:03:ODdmZThmZGMzOGY0NDJiYTdkNjJmYjBlN2JmNGZhZDYwZWE3YmE0ZjNkYWZlMThlNTJjMjk4NWEyZjExNmU1NTSSwhg=: 00:19:57.119 04:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.119 04:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.119 04:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.119 04:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.119 04:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.119 04:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:19:57.119 04:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:57.119 04:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:57.377 04:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:57.378 04:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.378 04:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:57.378 04:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:57.378 04:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:57.378 04:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.378 04:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.378 04:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.378 04:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.378 04:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.378 04:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.378 04:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.313 00:19:58.313 04:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.313 04:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.313 04:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.572 04:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.572 04:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.572 04:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.572 04:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.572 04:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.572 04:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.572 { 00:19:58.572 "cntlid": 43, 00:19:58.572 "qid": 0, 00:19:58.572 "state": "enabled", 00:19:58.572 "listen_address": { 00:19:58.572 "trtype": "TCP", 00:19:58.572 "adrfam": "IPv4", 00:19:58.572 "traddr": "10.0.0.2", 00:19:58.572 "trsvcid": "4420" 00:19:58.572 }, 00:19:58.572 "peer_address": { 
00:19:58.572 "trtype": "TCP", 00:19:58.572 "adrfam": "IPv4", 00:19:58.572 "traddr": "10.0.0.1", 00:19:58.572 "trsvcid": "55794" 00:19:58.572 }, 00:19:58.572 "auth": { 00:19:58.572 "state": "completed", 00:19:58.572 "digest": "sha256", 00:19:58.572 "dhgroup": "ffdhe8192" 00:19:58.572 } 00:19:58.572 } 00:19:58.572 ]' 00:19:58.572 04:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.572 04:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.572 04:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.572 04:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:58.572 04:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.572 04:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.572 04:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.572 04:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.830 04:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDM3MThiYmRjY2YzODllNzViY2I4NzMzMjRlZThkMTYBysTu: --dhchap-ctrl-secret DHHC-1:02:YjUxYWEzNDAyMzgxY2Y2NTI2YzNjNWJlZDY3NTUyYzg1NTI2ZWI0YzRlNmZhN2Qy7dMWbQ==: 00:19:59.766 04:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.766 04:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.766 04:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.766 04:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.766 04:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.766 04:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.766 04:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:59.766 04:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.024 04:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:00.024 04:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.024 04:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:00.024 04:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:00.024 04:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:00.024 04:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.024 04:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.024 04:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.024 04:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.024 04:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.024 04:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.024 04:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.959 00:20:00.959 04:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.959 04:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.959 04:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.215 04:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.215 04:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.215 04:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.215 04:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.215 04:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.215 04:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.215 { 00:20:01.215 "cntlid": 45, 00:20:01.215 "qid": 0, 00:20:01.215 "state": "enabled", 00:20:01.215 "listen_address": { 00:20:01.215 "trtype": "TCP", 00:20:01.215 "adrfam": "IPv4", 00:20:01.215 "traddr": "10.0.0.2", 00:20:01.215 "trsvcid": "4420" 00:20:01.215 }, 00:20:01.215 "peer_address": { 00:20:01.215 "trtype": "TCP", 00:20:01.215 "adrfam": "IPv4", 00:20:01.215 "traddr": "10.0.0.1", 00:20:01.215 "trsvcid": "55808" 00:20:01.215 }, 00:20:01.215 "auth": { 00:20:01.215 "state": "completed", 00:20:01.215 "digest": "sha256", 00:20:01.215 "dhgroup": "ffdhe8192" 00:20:01.215 } 00:20:01.215 } 00:20:01.215 ]' 00:20:01.215 04:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.215 04:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.215 04:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.471 04:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:01.471 04:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.471 04:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.471 04:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.471 04:36:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.728 04:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWVlZTRhOTIzNGVmYWI3MjBkMjYwZDg3OThjOWM0MjM4MmZiYjcwZDkzYmVhMzBmFlxh5w==: --dhchap-ctrl-secret DHHC-1:01:Mjc5YzUwZDNiMDY3YThjOWU5YTc5NWE3ZWYxN2EyMDnH6hJa: 00:20:02.695 04:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.695 04:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.695 04:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.695 04:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.695 04:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.695 04:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.695 04:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:02.695 04:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:02.951 04:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:02.951 04:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.951 04:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:02.951 04:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:02.951 04:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:02.951 04:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.951 04:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:02.951 04:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.951 04:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.951 04:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.951 04:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.951 04:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
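[editor's note] After each SPDK-host pass the same key is exercised from the kernel host with nvme-cli, as the connect/disconnect lines above record. A condensed sketch follows; the DHHC-1 secrets are the generated test keys from the trace, abbreviated here. Note that in this run key3 has no controller key, which is why the ${ckeys[$3]:+...} expansion visible in the trace drops the controller-key argument for that iteration.

# Kernel-host pass for the same subsystem and key (secrets abbreviated, not usable as-is).
nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 \
    -q $HOSTNQN --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret      'DHHC-1:02:MWVl...' \
    --dhchap-ctrl-secret 'DHHC-1:01:Mjc5...'   # omitted entirely when the key has no ckey (key3)

nvme disconnect -n $SUBNQN

# Remove the host entry so the next digest/dhgroup/key combination starts clean.
$SPDK_ROOT/scripts/rpc.py nvmf_subsystem_remove_host $SUBNQN $HOSTNQN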
00:20:03.883 00:20:03.883 04:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.883 04:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.883 04:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.141 04:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.141 04:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.141 04:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.141 04:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.141 04:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.141 04:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.141 { 00:20:04.141 "cntlid": 47, 00:20:04.141 "qid": 0, 00:20:04.141 "state": "enabled", 00:20:04.141 "listen_address": { 00:20:04.141 "trtype": "TCP", 00:20:04.141 "adrfam": "IPv4", 00:20:04.141 "traddr": "10.0.0.2", 00:20:04.141 "trsvcid": "4420" 00:20:04.141 }, 00:20:04.141 "peer_address": { 00:20:04.141 "trtype": "TCP", 00:20:04.141 "adrfam": "IPv4", 00:20:04.141 "traddr": "10.0.0.1", 00:20:04.141 "trsvcid": "55828" 00:20:04.141 }, 00:20:04.141 "auth": { 00:20:04.141 "state": "completed", 00:20:04.141 "digest": "sha256", 00:20:04.141 "dhgroup": "ffdhe8192" 00:20:04.141 } 00:20:04.141 } 00:20:04.141 ]' 00:20:04.141 04:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.141 04:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.141 04:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.141 04:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:04.141 04:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.141 04:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.141 04:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.141 04:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.398 04:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjhkNmI3ZjM5MDBmM2NmN2FjNzVkYzE3YmI5N2ZiNDFlMjkxNjFkNjkzODZkYjQ5ZWIwMTkyNjYwOTlhYzhlY0cATyg=: 00:20:05.333 04:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.333 04:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.333 04:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.333 04:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.333 
04:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.333 04:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:05.333 04:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.333 04:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.333 04:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:05.333 04:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:05.592 04:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:05.592 04:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.592 04:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:05.592 04:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:05.592 04:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:05.592 04:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.592 04:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.592 04:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.592 04:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.592 04:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.592 04:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.592 04:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.160 00:20:06.160 04:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.160 04:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.160 04:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.160 04:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.160 04:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.160 04:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.160 04:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.160 04:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.160 04:36:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.160 { 00:20:06.160 "cntlid": 49, 00:20:06.160 "qid": 0, 00:20:06.160 "state": "enabled", 00:20:06.160 "listen_address": { 00:20:06.160 "trtype": "TCP", 00:20:06.160 "adrfam": "IPv4", 00:20:06.160 "traddr": "10.0.0.2", 00:20:06.160 "trsvcid": "4420" 00:20:06.160 }, 00:20:06.160 "peer_address": { 00:20:06.160 "trtype": "TCP", 00:20:06.160 "adrfam": "IPv4", 00:20:06.160 "traddr": "10.0.0.1", 00:20:06.160 "trsvcid": "55844" 00:20:06.160 }, 00:20:06.160 "auth": { 00:20:06.160 "state": "completed", 00:20:06.160 "digest": "sha384", 00:20:06.160 "dhgroup": "null" 00:20:06.160 } 00:20:06.160 } 00:20:06.160 ]' 00:20:06.160 04:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.418 04:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.418 04:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.418 04:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:06.418 04:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.418 04:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.418 04:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.418 04:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.677 04:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:M2NhMTlhOThiZWY5ZGZmZGRmODBhZmI2NDdjNGUwYjhiZmZhYmUyZWEyY2M1NWRjls8hcg==: --dhchap-ctrl-secret DHHC-1:03:ODdmZThmZGMzOGY0NDJiYTdkNjJmYjBlN2JmNGZhZDYwZWE3YmE0ZjNkYWZlMThlNTJjMjk4NWEyZjExNmU1NTSSwhg=: 00:20:07.615 04:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.615 04:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.615 04:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.615 04:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.615 04:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.615 04:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.615 04:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:07.615 04:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:07.874 04:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:07.874 04:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.874 04:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:20:07.874 04:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:07.874 04:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:07.874 04:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.874 04:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.874 04:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.874 04:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.874 04:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.874 04:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.874 04:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.134 00:20:08.394 04:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.394 04:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.394 04:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.394 04:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.394 04:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.394 04:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.394 04:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.653 04:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.653 04:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.653 { 00:20:08.653 "cntlid": 51, 00:20:08.653 "qid": 0, 00:20:08.653 "state": "enabled", 00:20:08.653 "listen_address": { 00:20:08.653 "trtype": "TCP", 00:20:08.653 "adrfam": "IPv4", 00:20:08.653 "traddr": "10.0.0.2", 00:20:08.653 "trsvcid": "4420" 00:20:08.653 }, 00:20:08.653 "peer_address": { 00:20:08.653 "trtype": "TCP", 00:20:08.653 "adrfam": "IPv4", 00:20:08.653 "traddr": "10.0.0.1", 00:20:08.653 "trsvcid": "57506" 00:20:08.653 }, 00:20:08.653 "auth": { 00:20:08.653 "state": "completed", 00:20:08.653 "digest": "sha384", 00:20:08.653 "dhgroup": "null" 00:20:08.653 } 00:20:08.653 } 00:20:08.653 ]' 00:20:08.653 04:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.653 04:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.653 04:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.653 04:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
00:20:08.653 04:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.653 04:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.653 04:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.653 04:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.911 04:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDM3MThiYmRjY2YzODllNzViY2I4NzMzMjRlZThkMTYBysTu: --dhchap-ctrl-secret DHHC-1:02:YjUxYWEzNDAyMzgxY2Y2NTI2YzNjNWJlZDY3NTUyYzg1NTI2ZWI0YzRlNmZhN2Qy7dMWbQ==: 00:20:09.848 04:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.849 04:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.849 04:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.849 04:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.849 04:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.849 04:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.849 04:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:09.849 04:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:10.107 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:10.107 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.107 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:10.107 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:10.107 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:10.107 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.107 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.107 04:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.107 04:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.107 04:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.107 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:10.108 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.365 00:20:10.365 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.365 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.366 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.624 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.624 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.624 04:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.624 04:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.624 04:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.624 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.624 { 00:20:10.624 "cntlid": 53, 00:20:10.624 "qid": 0, 00:20:10.624 "state": "enabled", 00:20:10.624 "listen_address": { 00:20:10.624 "trtype": "TCP", 00:20:10.624 "adrfam": "IPv4", 00:20:10.624 "traddr": "10.0.0.2", 00:20:10.624 "trsvcid": "4420" 00:20:10.624 }, 00:20:10.624 "peer_address": { 00:20:10.624 "trtype": "TCP", 00:20:10.624 "adrfam": "IPv4", 00:20:10.624 "traddr": "10.0.0.1", 00:20:10.624 "trsvcid": "57532" 00:20:10.624 }, 00:20:10.624 "auth": { 00:20:10.624 "state": "completed", 00:20:10.624 "digest": "sha384", 00:20:10.624 "dhgroup": "null" 00:20:10.624 } 00:20:10.624 } 00:20:10.624 ]' 00:20:10.624 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.882 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.882 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.882 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:10.882 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.882 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.882 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.882 04:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.141 04:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWVlZTRhOTIzNGVmYWI3MjBkMjYwZDg3OThjOWM0MjM4MmZiYjcwZDkzYmVhMzBmFlxh5w==: --dhchap-ctrl-secret DHHC-1:01:Mjc5YzUwZDNiMDY3YThjOWU5YTc5NWE3ZWYxN2EyMDnH6hJa: 00:20:12.072 04:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.072 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:20:12.073 04:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.073 04:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.073 04:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.073 04:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.073 04:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.073 04:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:12.073 04:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:12.330 04:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:12.330 04:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.330 04:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:12.330 04:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:12.330 04:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:12.330 04:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.330 04:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:12.330 04:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.330 04:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.330 04:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.330 04:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:12.330 04:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:12.587 00:20:12.587 04:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.587 04:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.587 04:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.845 04:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.845 04:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.845 04:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.845 04:36:32 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:12.845 04:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.845 04:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.845 { 00:20:12.845 "cntlid": 55, 00:20:12.845 "qid": 0, 00:20:12.845 "state": "enabled", 00:20:12.845 "listen_address": { 00:20:12.845 "trtype": "TCP", 00:20:12.845 "adrfam": "IPv4", 00:20:12.845 "traddr": "10.0.0.2", 00:20:12.845 "trsvcid": "4420" 00:20:12.845 }, 00:20:12.845 "peer_address": { 00:20:12.845 "trtype": "TCP", 00:20:12.845 "adrfam": "IPv4", 00:20:12.845 "traddr": "10.0.0.1", 00:20:12.845 "trsvcid": "57566" 00:20:12.845 }, 00:20:12.845 "auth": { 00:20:12.845 "state": "completed", 00:20:12.845 "digest": "sha384", 00:20:12.845 "dhgroup": "null" 00:20:12.845 } 00:20:12.845 } 00:20:12.845 ]' 00:20:12.845 04:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.103 04:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.103 04:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.103 04:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:13.103 04:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.103 04:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.103 04:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.103 04:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.360 04:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjhkNmI3ZjM5MDBmM2NmN2FjNzVkYzE3YmI5N2ZiNDFlMjkxNjFkNjkzODZkYjQ5ZWIwMTkyNjYwOTlhYzhlY0cATyg=: 00:20:14.293 04:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.293 04:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.293 04:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.293 04:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.293 04:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.293 04:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.293 04:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.293 04:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:14.293 04:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:14.551 04:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:14.551 
04:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.551 04:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:14.551 04:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:14.551 04:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:14.551 04:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.551 04:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.551 04:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.551 04:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.551 04:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.551 04:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.551 04:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.809 00:20:14.809 04:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.809 04:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.809 04:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.066 04:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.066 04:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.066 04:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.066 04:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.066 04:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.066 04:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:15.066 { 00:20:15.066 "cntlid": 57, 00:20:15.066 "qid": 0, 00:20:15.066 "state": "enabled", 00:20:15.066 "listen_address": { 00:20:15.066 "trtype": "TCP", 00:20:15.066 "adrfam": "IPv4", 00:20:15.066 "traddr": "10.0.0.2", 00:20:15.066 "trsvcid": "4420" 00:20:15.066 }, 00:20:15.066 "peer_address": { 00:20:15.066 "trtype": "TCP", 00:20:15.066 "adrfam": "IPv4", 00:20:15.066 "traddr": "10.0.0.1", 00:20:15.066 "trsvcid": "57580" 00:20:15.066 }, 00:20:15.066 "auth": { 00:20:15.066 "state": "completed", 00:20:15.066 "digest": "sha384", 00:20:15.066 "dhgroup": "ffdhe2048" 00:20:15.066 } 00:20:15.066 } 00:20:15.066 ]' 00:20:15.066 04:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.066 04:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.066 04:36:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.066 04:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:15.066 04:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.324 04:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.324 04:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.324 04:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.581 04:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:M2NhMTlhOThiZWY5ZGZmZGRmODBhZmI2NDdjNGUwYjhiZmZhYmUyZWEyY2M1NWRjls8hcg==: --dhchap-ctrl-secret DHHC-1:03:ODdmZThmZGMzOGY0NDJiYTdkNjJmYjBlN2JmNGZhZDYwZWE3YmE0ZjNkYWZlMThlNTJjMjk4NWEyZjExNmU1NTSSwhg=: 00:20:16.542 04:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.542 04:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.542 04:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.542 04:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.542 04:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.542 04:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.542 04:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:16.542 04:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:16.542 04:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:16.542 04:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.542 04:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:16.542 04:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:16.542 04:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:16.542 04:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.542 04:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.542 04:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.542 04:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.800 04:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.800 04:36:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.800 04:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.058 00:20:17.058 04:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.058 04:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.058 04:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.315 04:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.315 04:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.315 04:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.315 04:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.315 04:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.315 04:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.315 { 00:20:17.315 "cntlid": 59, 00:20:17.315 "qid": 0, 00:20:17.315 "state": "enabled", 00:20:17.315 "listen_address": { 00:20:17.315 "trtype": "TCP", 00:20:17.315 "adrfam": "IPv4", 00:20:17.315 "traddr": "10.0.0.2", 00:20:17.315 "trsvcid": "4420" 00:20:17.315 }, 00:20:17.315 "peer_address": { 00:20:17.315 "trtype": "TCP", 00:20:17.315 "adrfam": "IPv4", 00:20:17.315 "traddr": "10.0.0.1", 00:20:17.315 "trsvcid": "57616" 00:20:17.315 }, 00:20:17.315 "auth": { 00:20:17.315 "state": "completed", 00:20:17.315 "digest": "sha384", 00:20:17.315 "dhgroup": "ffdhe2048" 00:20:17.315 } 00:20:17.316 } 00:20:17.316 ]' 00:20:17.316 04:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.316 04:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.316 04:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.316 04:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:17.316 04:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.316 04:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.316 04:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.316 04:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.573 04:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:ZDM3MThiYmRjY2YzODllNzViY2I4NzMzMjRlZThkMTYBysTu: --dhchap-ctrl-secret DHHC-1:02:YjUxYWEzNDAyMzgxY2Y2NTI2YzNjNWJlZDY3NTUyYzg1NTI2ZWI0YzRlNmZhN2Qy7dMWbQ==: 00:20:18.506 04:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.506 04:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.506 04:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.506 04:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.506 04:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.506 04:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.506 04:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:18.506 04:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:18.764 04:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:18.764 04:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.764 04:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:18.764 04:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:18.764 04:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:18.764 04:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.764 04:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.764 04:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.764 04:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.764 04:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.764 04:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.764 04:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.328 00:20:19.328 04:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.328 04:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.328 04:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:19.328 04:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.329 04:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.329 04:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.329 04:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.329 04:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.329 04:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.329 { 00:20:19.329 "cntlid": 61, 00:20:19.329 "qid": 0, 00:20:19.329 "state": "enabled", 00:20:19.329 "listen_address": { 00:20:19.329 "trtype": "TCP", 00:20:19.329 "adrfam": "IPv4", 00:20:19.329 "traddr": "10.0.0.2", 00:20:19.329 "trsvcid": "4420" 00:20:19.329 }, 00:20:19.329 "peer_address": { 00:20:19.329 "trtype": "TCP", 00:20:19.329 "adrfam": "IPv4", 00:20:19.329 "traddr": "10.0.0.1", 00:20:19.329 "trsvcid": "44048" 00:20:19.329 }, 00:20:19.329 "auth": { 00:20:19.329 "state": "completed", 00:20:19.329 "digest": "sha384", 00:20:19.329 "dhgroup": "ffdhe2048" 00:20:19.329 } 00:20:19.329 } 00:20:19.329 ]' 00:20:19.329 04:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.586 04:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.586 04:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.586 04:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:19.586 04:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.586 04:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.586 04:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.586 04:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.844 04:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWVlZTRhOTIzNGVmYWI3MjBkMjYwZDg3OThjOWM0MjM4MmZiYjcwZDkzYmVhMzBmFlxh5w==: --dhchap-ctrl-secret DHHC-1:01:Mjc5YzUwZDNiMDY3YThjOWU5YTc5NWE3ZWYxN2EyMDnH6hJa: 00:20:20.776 04:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.776 04:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.776 04:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.776 04:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.776 04:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.776 04:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.776 04:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:20:20.776 04:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:21.033 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:21.033 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.033 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:21.033 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:21.033 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:21.033 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.033 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:21.033 04:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.033 04:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.033 04:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.033 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.033 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.290 00:20:21.290 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.290 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.290 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.548 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.548 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.548 04:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.548 04:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.548 04:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.548 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.548 { 00:20:21.548 "cntlid": 63, 00:20:21.548 "qid": 0, 00:20:21.548 "state": "enabled", 00:20:21.548 "listen_address": { 00:20:21.548 "trtype": "TCP", 00:20:21.548 "adrfam": "IPv4", 00:20:21.548 "traddr": "10.0.0.2", 00:20:21.548 "trsvcid": "4420" 00:20:21.548 }, 00:20:21.548 "peer_address": { 00:20:21.548 "trtype": "TCP", 00:20:21.548 "adrfam": "IPv4", 00:20:21.548 "traddr": "10.0.0.1", 00:20:21.548 "trsvcid": "44076" 00:20:21.548 }, 00:20:21.548 "auth": { 00:20:21.548 "state": "completed", 00:20:21.548 "digest": 
"sha384", 00:20:21.548 "dhgroup": "ffdhe2048" 00:20:21.548 } 00:20:21.548 } 00:20:21.548 ]' 00:20:21.548 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.548 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.548 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.806 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:21.806 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.806 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.806 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.806 04:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.064 04:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjhkNmI3ZjM5MDBmM2NmN2FjNzVkYzE3YmI5N2ZiNDFlMjkxNjFkNjkzODZkYjQ5ZWIwMTkyNjYwOTlhYzhlY0cATyg=: 00:20:22.997 04:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.997 04:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.997 04:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.997 04:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.997 04:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.997 04:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.997 04:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.997 04:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:22.997 04:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:23.255 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:23.255 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.255 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:23.255 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:23.255 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:23.255 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.255 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:20:23.255 04:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.255 04:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.255 04:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.255 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.255 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.512 00:20:23.512 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.512 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.512 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.770 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.770 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.770 04:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.770 04:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.770 04:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.770 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.770 { 00:20:23.770 "cntlid": 65, 00:20:23.770 "qid": 0, 00:20:23.770 "state": "enabled", 00:20:23.770 "listen_address": { 00:20:23.770 "trtype": "TCP", 00:20:23.770 "adrfam": "IPv4", 00:20:23.770 "traddr": "10.0.0.2", 00:20:23.770 "trsvcid": "4420" 00:20:23.770 }, 00:20:23.770 "peer_address": { 00:20:23.770 "trtype": "TCP", 00:20:23.770 "adrfam": "IPv4", 00:20:23.770 "traddr": "10.0.0.1", 00:20:23.770 "trsvcid": "44086" 00:20:23.770 }, 00:20:23.770 "auth": { 00:20:23.770 "state": "completed", 00:20:23.770 "digest": "sha384", 00:20:23.770 "dhgroup": "ffdhe3072" 00:20:23.770 } 00:20:23.770 } 00:20:23.770 ]' 00:20:23.770 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.770 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.770 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.770 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:23.770 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.770 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.770 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.770 04:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.027 
04:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:M2NhMTlhOThiZWY5ZGZmZGRmODBhZmI2NDdjNGUwYjhiZmZhYmUyZWEyY2M1NWRjls8hcg==: --dhchap-ctrl-secret DHHC-1:03:ODdmZThmZGMzOGY0NDJiYTdkNjJmYjBlN2JmNGZhZDYwZWE3YmE0ZjNkYWZlMThlNTJjMjk4NWEyZjExNmU1NTSSwhg=: 00:20:24.958 04:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.215 04:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.215 04:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.215 04:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.215 04:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.215 04:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.215 04:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:25.215 04:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:25.473 04:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:25.473 04:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.473 04:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:25.473 04:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:25.473 04:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:25.473 04:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.473 04:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.473 04:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.473 04:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.473 04:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.473 04:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.473 04:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.730 00:20:25.730 04:36:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.730 04:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.730 04:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.987 04:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.987 04:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.987 04:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.987 04:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.987 04:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.987 04:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.987 { 00:20:25.987 "cntlid": 67, 00:20:25.987 "qid": 0, 00:20:25.987 "state": "enabled", 00:20:25.987 "listen_address": { 00:20:25.987 "trtype": "TCP", 00:20:25.987 "adrfam": "IPv4", 00:20:25.987 "traddr": "10.0.0.2", 00:20:25.987 "trsvcid": "4420" 00:20:25.987 }, 00:20:25.987 "peer_address": { 00:20:25.987 "trtype": "TCP", 00:20:25.987 "adrfam": "IPv4", 00:20:25.987 "traddr": "10.0.0.1", 00:20:25.987 "trsvcid": "44102" 00:20:25.987 }, 00:20:25.987 "auth": { 00:20:25.987 "state": "completed", 00:20:25.987 "digest": "sha384", 00:20:25.987 "dhgroup": "ffdhe3072" 00:20:25.987 } 00:20:25.987 } 00:20:25.987 ]' 00:20:25.987 04:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.987 04:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.987 04:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.245 04:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:26.245 04:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.245 04:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.245 04:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.245 04:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.502 04:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDM3MThiYmRjY2YzODllNzViY2I4NzMzMjRlZThkMTYBysTu: --dhchap-ctrl-secret DHHC-1:02:YjUxYWEzNDAyMzgxY2Y2NTI2YzNjNWJlZDY3NTUyYzg1NTI2ZWI0YzRlNmZhN2Qy7dMWbQ==: 00:20:27.435 04:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.435 04:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.435 04:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.435 04:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.435 
04:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.435 04:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.435 04:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:27.435 04:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:27.693 04:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:27.693 04:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.693 04:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.693 04:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:27.693 04:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:27.693 04:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.693 04:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.693 04:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.693 04:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.693 04:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.693 04:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.693 04:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.257 00:20:28.257 04:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.257 04:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.257 04:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.515 04:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.515 04:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.515 04:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.515 04:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.515 04:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.515 04:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.515 { 00:20:28.515 "cntlid": 69, 00:20:28.515 "qid": 0, 00:20:28.515 "state": "enabled", 00:20:28.515 "listen_address": { 
00:20:28.515 "trtype": "TCP", 00:20:28.515 "adrfam": "IPv4", 00:20:28.515 "traddr": "10.0.0.2", 00:20:28.515 "trsvcid": "4420" 00:20:28.515 }, 00:20:28.515 "peer_address": { 00:20:28.515 "trtype": "TCP", 00:20:28.515 "adrfam": "IPv4", 00:20:28.515 "traddr": "10.0.0.1", 00:20:28.515 "trsvcid": "38850" 00:20:28.515 }, 00:20:28.515 "auth": { 00:20:28.515 "state": "completed", 00:20:28.515 "digest": "sha384", 00:20:28.515 "dhgroup": "ffdhe3072" 00:20:28.515 } 00:20:28.515 } 00:20:28.515 ]' 00:20:28.515 04:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.515 04:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.515 04:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.515 04:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:28.515 04:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.515 04:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.515 04:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.515 04:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.773 04:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWVlZTRhOTIzNGVmYWI3MjBkMjYwZDg3OThjOWM0MjM4MmZiYjcwZDkzYmVhMzBmFlxh5w==: --dhchap-ctrl-secret DHHC-1:01:Mjc5YzUwZDNiMDY3YThjOWU5YTc5NWE3ZWYxN2EyMDnH6hJa: 00:20:29.712 04:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.712 04:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.712 04:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.712 04:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.712 04:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.712 04:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.712 04:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.712 04:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.019 04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:30.019 04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.019 04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:30.019 04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:30.019 04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:30.019 
04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.019 04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:30.019 04:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.019 04:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.019 04:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.019 04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:30.019 04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:30.278 00:20:30.278 04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.278 04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.278 04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.541 04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.541 04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.542 04:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.542 04:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.542 04:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.542 04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.542 { 00:20:30.542 "cntlid": 71, 00:20:30.542 "qid": 0, 00:20:30.542 "state": "enabled", 00:20:30.542 "listen_address": { 00:20:30.542 "trtype": "TCP", 00:20:30.542 "adrfam": "IPv4", 00:20:30.542 "traddr": "10.0.0.2", 00:20:30.542 "trsvcid": "4420" 00:20:30.542 }, 00:20:30.542 "peer_address": { 00:20:30.542 "trtype": "TCP", 00:20:30.542 "adrfam": "IPv4", 00:20:30.542 "traddr": "10.0.0.1", 00:20:30.542 "trsvcid": "38866" 00:20:30.542 }, 00:20:30.542 "auth": { 00:20:30.542 "state": "completed", 00:20:30.542 "digest": "sha384", 00:20:30.542 "dhgroup": "ffdhe3072" 00:20:30.542 } 00:20:30.542 } 00:20:30.542 ]' 00:20:30.542 04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.542 04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.542 04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.542 04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:30.542 04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.801 04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.801 04:36:50 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.801 04:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.058 04:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjhkNmI3ZjM5MDBmM2NmN2FjNzVkYzE3YmI5N2ZiNDFlMjkxNjFkNjkzODZkYjQ5ZWIwMTkyNjYwOTlhYzhlY0cATyg=: 00:20:31.992 04:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.992 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.992 04:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.992 04:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.992 04:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.992 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.992 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.992 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:31.992 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:32.249 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:32.250 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.250 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:32.250 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:32.250 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:32.250 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.250 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.250 04:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.250 04:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.250 04:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.250 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.250 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.508 00:20:32.508 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.508 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.508 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.766 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.766 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.766 04:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.766 04:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.766 04:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.766 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.766 { 00:20:32.766 "cntlid": 73, 00:20:32.766 "qid": 0, 00:20:32.766 "state": "enabled", 00:20:32.766 "listen_address": { 00:20:32.766 "trtype": "TCP", 00:20:32.766 "adrfam": "IPv4", 00:20:32.766 "traddr": "10.0.0.2", 00:20:32.766 "trsvcid": "4420" 00:20:32.766 }, 00:20:32.766 "peer_address": { 00:20:32.766 "trtype": "TCP", 00:20:32.766 "adrfam": "IPv4", 00:20:32.766 "traddr": "10.0.0.1", 00:20:32.766 "trsvcid": "38896" 00:20:32.766 }, 00:20:32.766 "auth": { 00:20:32.766 "state": "completed", 00:20:32.766 "digest": "sha384", 00:20:32.766 "dhgroup": "ffdhe4096" 00:20:32.766 } 00:20:32.766 } 00:20:32.766 ]' 00:20:32.766 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.024 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.024 04:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.024 04:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:33.024 04:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.024 04:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.024 04:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.024 04:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.281 04:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:M2NhMTlhOThiZWY5ZGZmZGRmODBhZmI2NDdjNGUwYjhiZmZhYmUyZWEyY2M1NWRjls8hcg==: --dhchap-ctrl-secret DHHC-1:03:ODdmZThmZGMzOGY0NDJiYTdkNjJmYjBlN2JmNGZhZDYwZWE3YmE0ZjNkYWZlMThlNTJjMjk4NWEyZjExNmU1NTSSwhg=: 00:20:34.215 04:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.215 04:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.215 04:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.215 04:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.215 04:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.215 04:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.215 04:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:34.215 04:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:34.473 04:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:34.473 04:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.473 04:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:34.473 04:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:34.473 04:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:34.473 04:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.473 04:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.473 04:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.473 04:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.473 04:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.473 04:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.473 04:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.731 00:20:34.731 04:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.731 04:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.731 04:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.990 04:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.990 04:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.990 04:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.990 04:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:34.990 04:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.990 04:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.990 { 00:20:34.990 "cntlid": 75, 00:20:34.990 "qid": 0, 00:20:34.990 "state": "enabled", 00:20:34.990 "listen_address": { 00:20:34.990 "trtype": "TCP", 00:20:34.990 "adrfam": "IPv4", 00:20:34.990 "traddr": "10.0.0.2", 00:20:34.990 "trsvcid": "4420" 00:20:34.990 }, 00:20:34.990 "peer_address": { 00:20:34.990 "trtype": "TCP", 00:20:34.990 "adrfam": "IPv4", 00:20:34.990 "traddr": "10.0.0.1", 00:20:34.990 "trsvcid": "38930" 00:20:34.990 }, 00:20:34.990 "auth": { 00:20:34.990 "state": "completed", 00:20:34.990 "digest": "sha384", 00:20:34.990 "dhgroup": "ffdhe4096" 00:20:34.990 } 00:20:34.990 } 00:20:34.990 ]' 00:20:34.990 04:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.248 04:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.248 04:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.248 04:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.248 04:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.248 04:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.248 04:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.248 04:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.506 04:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDM3MThiYmRjY2YzODllNzViY2I4NzMzMjRlZThkMTYBysTu: --dhchap-ctrl-secret DHHC-1:02:YjUxYWEzNDAyMzgxY2Y2NTI2YzNjNWJlZDY3NTUyYzg1NTI2ZWI0YzRlNmZhN2Qy7dMWbQ==: 00:20:36.438 04:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.438 04:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.438 04:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.438 04:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.438 04:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.438 04:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.438 04:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.438 04:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.696 04:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:36.696 04:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:20:36.696 04:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:36.696 04:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:36.696 04:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:36.696 04:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.696 04:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.696 04:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.696 04:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.696 04:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.696 04:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.696 04:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.261 00:20:37.261 04:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.261 04:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.261 04:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.519 04:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.519 04:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.519 04:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.519 04:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.519 04:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.519 04:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.519 { 00:20:37.519 "cntlid": 77, 00:20:37.519 "qid": 0, 00:20:37.519 "state": "enabled", 00:20:37.519 "listen_address": { 00:20:37.519 "trtype": "TCP", 00:20:37.519 "adrfam": "IPv4", 00:20:37.519 "traddr": "10.0.0.2", 00:20:37.519 "trsvcid": "4420" 00:20:37.519 }, 00:20:37.519 "peer_address": { 00:20:37.519 "trtype": "TCP", 00:20:37.519 "adrfam": "IPv4", 00:20:37.519 "traddr": "10.0.0.1", 00:20:37.519 "trsvcid": "38964" 00:20:37.519 }, 00:20:37.519 "auth": { 00:20:37.519 "state": "completed", 00:20:37.519 "digest": "sha384", 00:20:37.519 "dhgroup": "ffdhe4096" 00:20:37.519 } 00:20:37.519 } 00:20:37.519 ]' 00:20:37.519 04:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.519 04:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.519 04:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:20:37.519 04:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.519 04:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.519 04:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.519 04:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.519 04:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.777 04:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWVlZTRhOTIzNGVmYWI3MjBkMjYwZDg3OThjOWM0MjM4MmZiYjcwZDkzYmVhMzBmFlxh5w==: --dhchap-ctrl-secret DHHC-1:01:Mjc5YzUwZDNiMDY3YThjOWU5YTc5NWE3ZWYxN2EyMDnH6hJa: 00:20:38.709 04:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.709 04:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.709 04:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.709 04:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.709 04:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.709 04:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.709 04:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.709 04:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.967 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:38.967 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.967 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:38.967 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:38.967 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:38.967 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.967 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:38.967 04:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.967 04:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.225 04:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.225 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:39.225 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:39.482 00:20:39.482 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.482 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.482 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.738 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.739 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.739 04:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.739 04:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.739 04:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.739 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.739 { 00:20:39.739 "cntlid": 79, 00:20:39.739 "qid": 0, 00:20:39.739 "state": "enabled", 00:20:39.739 "listen_address": { 00:20:39.739 "trtype": "TCP", 00:20:39.739 "adrfam": "IPv4", 00:20:39.739 "traddr": "10.0.0.2", 00:20:39.739 "trsvcid": "4420" 00:20:39.739 }, 00:20:39.739 "peer_address": { 00:20:39.739 "trtype": "TCP", 00:20:39.739 "adrfam": "IPv4", 00:20:39.739 "traddr": "10.0.0.1", 00:20:39.739 "trsvcid": "47880" 00:20:39.739 }, 00:20:39.739 "auth": { 00:20:39.739 "state": "completed", 00:20:39.739 "digest": "sha384", 00:20:39.739 "dhgroup": "ffdhe4096" 00:20:39.739 } 00:20:39.739 } 00:20:39.739 ]' 00:20:39.739 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.739 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.739 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.995 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:39.995 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.995 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.995 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.995 04:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.252 04:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjhkNmI3ZjM5MDBmM2NmN2FjNzVkYzE3YmI5N2ZiNDFlMjkxNjFkNjkzODZkYjQ5ZWIwMTkyNjYwOTlhYzhlY0cATyg=: 00:20:41.180 04:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.181 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.181 04:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.181 04:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.181 04:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.181 04:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.181 04:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.181 04:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.181 04:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.181 04:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.437 04:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:41.437 04:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.437 04:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:41.437 04:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:41.437 04:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:41.437 04:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.437 04:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.437 04:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.437 04:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.437 04:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.438 04:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.438 04:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.002 00:20:42.002 04:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.002 04:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.003 04:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.261 04:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.261 04:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.261 04:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.261 04:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.261 04:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.261 04:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.261 { 00:20:42.261 "cntlid": 81, 00:20:42.261 "qid": 0, 00:20:42.261 "state": "enabled", 00:20:42.261 "listen_address": { 00:20:42.261 "trtype": "TCP", 00:20:42.261 "adrfam": "IPv4", 00:20:42.261 "traddr": "10.0.0.2", 00:20:42.261 "trsvcid": "4420" 00:20:42.261 }, 00:20:42.261 "peer_address": { 00:20:42.261 "trtype": "TCP", 00:20:42.261 "adrfam": "IPv4", 00:20:42.261 "traddr": "10.0.0.1", 00:20:42.261 "trsvcid": "47920" 00:20:42.261 }, 00:20:42.261 "auth": { 00:20:42.261 "state": "completed", 00:20:42.261 "digest": "sha384", 00:20:42.261 "dhgroup": "ffdhe6144" 00:20:42.261 } 00:20:42.261 } 00:20:42.261 ]' 00:20:42.261 04:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.261 04:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.261 04:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.518 04:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.518 04:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.518 04:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.518 04:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.518 04:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.776 04:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:M2NhMTlhOThiZWY5ZGZmZGRmODBhZmI2NDdjNGUwYjhiZmZhYmUyZWEyY2M1NWRjls8hcg==: --dhchap-ctrl-secret DHHC-1:03:ODdmZThmZGMzOGY0NDJiYTdkNjJmYjBlN2JmNGZhZDYwZWE3YmE0ZjNkYWZlMThlNTJjMjk4NWEyZjExNmU1NTSSwhg=: 00:20:43.744 04:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.744 04:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.744 04:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.744 04:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.744 04:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.744 04:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.744 04:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.744 04:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.002 04:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:44.002 04:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.002 04:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:44.002 04:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:44.002 04:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:44.002 04:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.002 04:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.002 04:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.003 04:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.003 04:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.003 04:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.003 04:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.566 00:20:44.566 04:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.566 04:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.566 04:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.824 04:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.824 04:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.824 04:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.824 04:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.824 04:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.824 04:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.824 { 00:20:44.824 "cntlid": 83, 00:20:44.824 "qid": 0, 00:20:44.824 "state": "enabled", 00:20:44.824 "listen_address": { 00:20:44.824 "trtype": "TCP", 00:20:44.824 "adrfam": "IPv4", 00:20:44.824 "traddr": "10.0.0.2", 00:20:44.824 "trsvcid": "4420" 00:20:44.825 }, 00:20:44.825 "peer_address": { 00:20:44.825 "trtype": "TCP", 00:20:44.825 "adrfam": "IPv4", 00:20:44.825 "traddr": "10.0.0.1", 00:20:44.825 "trsvcid": "47948" 00:20:44.825 }, 00:20:44.825 "auth": { 00:20:44.825 "state": "completed", 00:20:44.825 "digest": "sha384", 00:20:44.825 
"dhgroup": "ffdhe6144" 00:20:44.825 } 00:20:44.825 } 00:20:44.825 ]' 00:20:44.825 04:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.825 04:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.825 04:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.825 04:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:44.825 04:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.083 04:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.083 04:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.083 04:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.340 04:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDM3MThiYmRjY2YzODllNzViY2I4NzMzMjRlZThkMTYBysTu: --dhchap-ctrl-secret DHHC-1:02:YjUxYWEzNDAyMzgxY2Y2NTI2YzNjNWJlZDY3NTUyYzg1NTI2ZWI0YzRlNmZhN2Qy7dMWbQ==: 00:20:46.272 04:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.272 04:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.272 04:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.272 04:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.272 04:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.272 04:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.272 04:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.272 04:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.530 04:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:46.530 04:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.530 04:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:46.530 04:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:46.530 04:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:46.530 04:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.530 04:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.530 04:37:06 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.530 04:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.530 04:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.530 04:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.531 04:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.094 00:20:47.094 04:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.094 04:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.095 04:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.351 04:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.351 04:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.351 04:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.351 04:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.352 04:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.352 04:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.352 { 00:20:47.352 "cntlid": 85, 00:20:47.352 "qid": 0, 00:20:47.352 "state": "enabled", 00:20:47.352 "listen_address": { 00:20:47.352 "trtype": "TCP", 00:20:47.352 "adrfam": "IPv4", 00:20:47.352 "traddr": "10.0.0.2", 00:20:47.352 "trsvcid": "4420" 00:20:47.352 }, 00:20:47.352 "peer_address": { 00:20:47.352 "trtype": "TCP", 00:20:47.352 "adrfam": "IPv4", 00:20:47.352 "traddr": "10.0.0.1", 00:20:47.352 "trsvcid": "47980" 00:20:47.352 }, 00:20:47.352 "auth": { 00:20:47.352 "state": "completed", 00:20:47.352 "digest": "sha384", 00:20:47.352 "dhgroup": "ffdhe6144" 00:20:47.352 } 00:20:47.352 } 00:20:47.352 ]' 00:20:47.352 04:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.352 04:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.352 04:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.352 04:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:47.352 04:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.352 04:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.352 04:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.352 04:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.609 04:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWVlZTRhOTIzNGVmYWI3MjBkMjYwZDg3OThjOWM0MjM4MmZiYjcwZDkzYmVhMzBmFlxh5w==: --dhchap-ctrl-secret DHHC-1:01:Mjc5YzUwZDNiMDY3YThjOWU5YTc5NWE3ZWYxN2EyMDnH6hJa: 00:20:48.562 04:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.562 04:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.562 04:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.562 04:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.562 04:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.562 04:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.562 04:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.562 04:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.820 04:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:48.820 04:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.820 04:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:48.820 04:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:48.820 04:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:48.820 04:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.820 04:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:48.820 04:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.820 04:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.820 04:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.820 04:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:48.820 04:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:49.385 00:20:49.385 04:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.385 04:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.385 04:37:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.642 04:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.642 04:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.642 04:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.642 04:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.642 04:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.642 04:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.642 { 00:20:49.642 "cntlid": 87, 00:20:49.642 "qid": 0, 00:20:49.642 "state": "enabled", 00:20:49.642 "listen_address": { 00:20:49.642 "trtype": "TCP", 00:20:49.642 "adrfam": "IPv4", 00:20:49.642 "traddr": "10.0.0.2", 00:20:49.642 "trsvcid": "4420" 00:20:49.642 }, 00:20:49.642 "peer_address": { 00:20:49.642 "trtype": "TCP", 00:20:49.642 "adrfam": "IPv4", 00:20:49.642 "traddr": "10.0.0.1", 00:20:49.642 "trsvcid": "47072" 00:20:49.642 }, 00:20:49.642 "auth": { 00:20:49.642 "state": "completed", 00:20:49.642 "digest": "sha384", 00:20:49.642 "dhgroup": "ffdhe6144" 00:20:49.642 } 00:20:49.642 } 00:20:49.642 ]' 00:20:49.642 04:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.642 04:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.642 04:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.900 04:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:49.900 04:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.900 04:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.900 04:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.900 04:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.157 04:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjhkNmI3ZjM5MDBmM2NmN2FjNzVkYzE3YmI5N2ZiNDFlMjkxNjFkNjkzODZkYjQ5ZWIwMTkyNjYwOTlhYzhlY0cATyg=: 00:20:51.090 04:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.090 04:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.090 04:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.090 04:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.090 04:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.090 04:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:51.090 04:37:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.090 04:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.090 04:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.348 04:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:51.348 04:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.348 04:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:51.348 04:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:51.348 04:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:51.348 04:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.348 04:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.348 04:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.348 04:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.348 04:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.349 04:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.349 04:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.281 00:20:52.281 04:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.281 04:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.281 04:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.539 04:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.539 04:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.539 04:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.539 04:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.539 04:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.539 04:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.539 { 00:20:52.539 "cntlid": 89, 00:20:52.539 "qid": 0, 00:20:52.539 "state": "enabled", 00:20:52.539 "listen_address": { 00:20:52.539 "trtype": "TCP", 00:20:52.539 "adrfam": "IPv4", 00:20:52.539 "traddr": "10.0.0.2", 00:20:52.539 
"trsvcid": "4420" 00:20:52.539 }, 00:20:52.539 "peer_address": { 00:20:52.539 "trtype": "TCP", 00:20:52.539 "adrfam": "IPv4", 00:20:52.539 "traddr": "10.0.0.1", 00:20:52.539 "trsvcid": "47104" 00:20:52.539 }, 00:20:52.539 "auth": { 00:20:52.539 "state": "completed", 00:20:52.539 "digest": "sha384", 00:20:52.539 "dhgroup": "ffdhe8192" 00:20:52.539 } 00:20:52.539 } 00:20:52.539 ]' 00:20:52.539 04:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.539 04:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.539 04:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.539 04:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:52.539 04:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.539 04:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.539 04:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.539 04:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.797 04:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:M2NhMTlhOThiZWY5ZGZmZGRmODBhZmI2NDdjNGUwYjhiZmZhYmUyZWEyY2M1NWRjls8hcg==: --dhchap-ctrl-secret DHHC-1:03:ODdmZThmZGMzOGY0NDJiYTdkNjJmYjBlN2JmNGZhZDYwZWE3YmE0ZjNkYWZlMThlNTJjMjk4NWEyZjExNmU1NTSSwhg=: 00:20:53.729 04:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.729 04:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.729 04:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.729 04:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.729 04:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.729 04:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.729 04:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:53.729 04:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:53.987 04:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:53.987 04:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.987 04:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:53.987 04:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:53.987 04:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:53.987 04:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.987 04:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.987 04:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.987 04:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.987 04:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.987 04:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.987 04:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.919 00:20:54.919 04:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.919 04:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.919 04:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.176 04:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.176 04:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.176 04:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.176 04:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.176 04:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.176 04:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.176 { 00:20:55.176 "cntlid": 91, 00:20:55.176 "qid": 0, 00:20:55.176 "state": "enabled", 00:20:55.176 "listen_address": { 00:20:55.176 "trtype": "TCP", 00:20:55.176 "adrfam": "IPv4", 00:20:55.176 "traddr": "10.0.0.2", 00:20:55.176 "trsvcid": "4420" 00:20:55.176 }, 00:20:55.176 "peer_address": { 00:20:55.176 "trtype": "TCP", 00:20:55.176 "adrfam": "IPv4", 00:20:55.176 "traddr": "10.0.0.1", 00:20:55.176 "trsvcid": "47136" 00:20:55.176 }, 00:20:55.176 "auth": { 00:20:55.176 "state": "completed", 00:20:55.176 "digest": "sha384", 00:20:55.176 "dhgroup": "ffdhe8192" 00:20:55.176 } 00:20:55.176 } 00:20:55.176 ]' 00:20:55.176 04:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.433 04:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.433 04:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.433 04:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:55.433 04:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.433 04:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.433 04:37:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.433 04:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.691 04:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDM3MThiYmRjY2YzODllNzViY2I4NzMzMjRlZThkMTYBysTu: --dhchap-ctrl-secret DHHC-1:02:YjUxYWEzNDAyMzgxY2Y2NTI2YzNjNWJlZDY3NTUyYzg1NTI2ZWI0YzRlNmZhN2Qy7dMWbQ==: 00:20:56.623 04:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.623 04:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.623 04:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.623 04:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.623 04:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.623 04:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.623 04:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:56.623 04:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:56.883 04:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:56.883 04:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.883 04:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:56.883 04:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:56.883 04:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:56.883 04:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.883 04:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.883 04:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.883 04:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.883 04:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.883 04:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.883 04:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.852 00:20:57.852 04:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.852 04:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.852 04:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.852 04:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.852 04:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.852 04:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.852 04:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.110 04:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.110 04:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.110 { 00:20:58.110 "cntlid": 93, 00:20:58.110 "qid": 0, 00:20:58.110 "state": "enabled", 00:20:58.110 "listen_address": { 00:20:58.110 "trtype": "TCP", 00:20:58.110 "adrfam": "IPv4", 00:20:58.110 "traddr": "10.0.0.2", 00:20:58.110 "trsvcid": "4420" 00:20:58.110 }, 00:20:58.111 "peer_address": { 00:20:58.111 "trtype": "TCP", 00:20:58.111 "adrfam": "IPv4", 00:20:58.111 "traddr": "10.0.0.1", 00:20:58.111 "trsvcid": "47166" 00:20:58.111 }, 00:20:58.111 "auth": { 00:20:58.111 "state": "completed", 00:20:58.111 "digest": "sha384", 00:20:58.111 "dhgroup": "ffdhe8192" 00:20:58.111 } 00:20:58.111 } 00:20:58.111 ]' 00:20:58.111 04:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.111 04:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.111 04:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.111 04:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:58.111 04:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.111 04:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.111 04:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.111 04:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.369 04:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWVlZTRhOTIzNGVmYWI3MjBkMjYwZDg3OThjOWM0MjM4MmZiYjcwZDkzYmVhMzBmFlxh5w==: --dhchap-ctrl-secret DHHC-1:01:Mjc5YzUwZDNiMDY3YThjOWU5YTc5NWE3ZWYxN2EyMDnH6hJa: 00:20:59.307 04:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.307 04:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.307 04:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.307 04:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.307 04:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.307 04:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.307 04:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:59.307 04:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:59.565 04:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:59.565 04:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.565 04:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:59.565 04:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:59.565 04:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:59.565 04:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.565 04:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:59.565 04:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.565 04:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.565 04:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.565 04:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.565 04:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.513 00:21:00.513 04:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.513 04:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.513 04:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.771 04:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.771 04:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.771 04:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.771 04:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.771 04:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.771 04:37:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.771 { 00:21:00.771 "cntlid": 95, 00:21:00.771 "qid": 0, 00:21:00.771 "state": "enabled", 00:21:00.771 "listen_address": { 00:21:00.771 "trtype": "TCP", 00:21:00.771 "adrfam": "IPv4", 00:21:00.771 "traddr": "10.0.0.2", 00:21:00.771 "trsvcid": "4420" 00:21:00.771 }, 00:21:00.771 "peer_address": { 00:21:00.771 "trtype": "TCP", 00:21:00.771 "adrfam": "IPv4", 00:21:00.771 "traddr": "10.0.0.1", 00:21:00.771 "trsvcid": "43944" 00:21:00.771 }, 00:21:00.771 "auth": { 00:21:00.771 "state": "completed", 00:21:00.771 "digest": "sha384", 00:21:00.771 "dhgroup": "ffdhe8192" 00:21:00.771 } 00:21:00.771 } 00:21:00.771 ]' 00:21:00.771 04:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.771 04:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.772 04:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.772 04:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:00.772 04:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.028 04:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.028 04:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.028 04:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.286 04:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjhkNmI3ZjM5MDBmM2NmN2FjNzVkYzE3YmI5N2ZiNDFlMjkxNjFkNjkzODZkYjQ5ZWIwMTkyNjYwOTlhYzhlY0cATyg=: 00:21:02.223 04:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.223 04:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.223 04:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.223 04:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.223 04:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.223 04:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:02.223 04:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.223 04:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.223 04:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:02.223 04:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:02.482 04:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:02.482 04:37:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.482 04:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:02.482 04:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:02.482 04:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:02.482 04:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.482 04:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.482 04:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.482 04:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.482 04:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.482 04:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.482 04:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.739 00:21:02.739 04:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.739 04:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.739 04:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.998 04:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.998 04:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.998 04:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.998 04:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.998 04:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.998 04:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.998 { 00:21:02.998 "cntlid": 97, 00:21:02.998 "qid": 0, 00:21:02.998 "state": "enabled", 00:21:02.998 "listen_address": { 00:21:02.998 "trtype": "TCP", 00:21:02.998 "adrfam": "IPv4", 00:21:02.998 "traddr": "10.0.0.2", 00:21:02.998 "trsvcid": "4420" 00:21:02.998 }, 00:21:02.998 "peer_address": { 00:21:02.998 "trtype": "TCP", 00:21:02.998 "adrfam": "IPv4", 00:21:02.998 "traddr": "10.0.0.1", 00:21:02.998 "trsvcid": "43974" 00:21:02.998 }, 00:21:02.998 "auth": { 00:21:02.998 "state": "completed", 00:21:02.998 "digest": "sha512", 00:21:02.998 "dhgroup": "null" 00:21:02.998 } 00:21:02.998 } 00:21:02.998 ]' 00:21:02.998 04:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.998 04:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.998 04:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:21:02.998 04:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:02.998 04:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.998 04:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.998 04:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.998 04:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.566 04:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:M2NhMTlhOThiZWY5ZGZmZGRmODBhZmI2NDdjNGUwYjhiZmZhYmUyZWEyY2M1NWRjls8hcg==: --dhchap-ctrl-secret DHHC-1:03:ODdmZThmZGMzOGY0NDJiYTdkNjJmYjBlN2JmNGZhZDYwZWE3YmE0ZjNkYWZlMThlNTJjMjk4NWEyZjExNmU1NTSSwhg=: 00:21:04.501 04:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.501 04:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.501 04:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.501 04:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.501 04:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.501 04:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.501 04:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:04.501 04:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:04.762 04:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:04.762 04:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.762 04:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:04.762 04:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:04.762 04:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:04.762 04:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.762 04:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.762 04:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.762 04:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.762 04:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.762 04:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.762 04:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.021 00:21:05.021 04:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.021 04:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.021 04:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.280 04:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.280 04:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.280 04:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.280 04:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.280 04:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.280 04:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.280 { 00:21:05.280 "cntlid": 99, 00:21:05.280 "qid": 0, 00:21:05.280 "state": "enabled", 00:21:05.280 "listen_address": { 00:21:05.280 "trtype": "TCP", 00:21:05.280 "adrfam": "IPv4", 00:21:05.280 "traddr": "10.0.0.2", 00:21:05.280 "trsvcid": "4420" 00:21:05.280 }, 00:21:05.280 "peer_address": { 00:21:05.280 "trtype": "TCP", 00:21:05.280 "adrfam": "IPv4", 00:21:05.280 "traddr": "10.0.0.1", 00:21:05.280 "trsvcid": "43984" 00:21:05.280 }, 00:21:05.280 "auth": { 00:21:05.280 "state": "completed", 00:21:05.280 "digest": "sha512", 00:21:05.280 "dhgroup": "null" 00:21:05.280 } 00:21:05.280 } 00:21:05.280 ]' 00:21:05.280 04:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.280 04:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.280 04:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.280 04:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:05.280 04:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.280 04:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.280 04:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.280 04:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.538 04:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDM3MThiYmRjY2YzODllNzViY2I4NzMzMjRlZThkMTYBysTu: --dhchap-ctrl-secret DHHC-1:02:YjUxYWEzNDAyMzgxY2Y2NTI2YzNjNWJlZDY3NTUyYzg1NTI2ZWI0YzRlNmZhN2Qy7dMWbQ==: 
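The records above repeat one iteration of the test's connect_authenticate flow for each digest, DH-group, and key combination: the host-side bdev_nvme service is told which DH-HMAC-CHAP digest and FFDHE group to offer, the host NQN is added to the subsystem with a key pair, a controller is attached, and the resulting qpair is inspected with jq to confirm that the negotiated digest, dhgroup, and auth state match expectations before the controller is detached again. A condensed bash sketch of that shape follows; it is a reconstruction from this log rather than an excerpt of target/auth.sh, and the rpc.py path, socket path, and key names (key1/ckey1 stand for keys registered earlier in the test, not shown here) are illustrative placeholders.

# One iteration of the flow seen in this log (illustrative sketch, not the real script).
rpc=./scripts/rpc.py                       # SPDK RPC client (path shortened here)
hostsock=/var/tmp/host.sock                # host-side bdev_nvme application socket
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
digest=sha384 dhgroup=ffdhe8192 keyid=1

# Host side: restrict the digests/DH groups offered during authentication.
"$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Target side: allow this host on the subsystem with a host/controller key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Host side: attach a controller, which triggers DH-HMAC-CHAP authentication.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Target side: verify the qpair negotiated the expected parameters.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]

# Host side: detach before the next combination is tried.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0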
00:21:06.472 04:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.473 04:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.473 04:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.473 04:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.473 04:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.473 04:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.473 04:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.473 04:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.731 04:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:06.731 04:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.731 04:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:06.731 04:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:06.731 04:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:06.731 04:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.732 04:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.732 04:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.732 04:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.732 04:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.732 04:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.732 04:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.991 00:21:07.250 04:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.250 04:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.250 04:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.250 04:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.250 04:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.250 04:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.250 04:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.250 04:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.250 04:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.250 { 00:21:07.250 "cntlid": 101, 00:21:07.250 "qid": 0, 00:21:07.250 "state": "enabled", 00:21:07.250 "listen_address": { 00:21:07.250 "trtype": "TCP", 00:21:07.250 "adrfam": "IPv4", 00:21:07.250 "traddr": "10.0.0.2", 00:21:07.250 "trsvcid": "4420" 00:21:07.250 }, 00:21:07.250 "peer_address": { 00:21:07.250 "trtype": "TCP", 00:21:07.250 "adrfam": "IPv4", 00:21:07.250 "traddr": "10.0.0.1", 00:21:07.250 "trsvcid": "44000" 00:21:07.250 }, 00:21:07.250 "auth": { 00:21:07.250 "state": "completed", 00:21:07.250 "digest": "sha512", 00:21:07.250 "dhgroup": "null" 00:21:07.250 } 00:21:07.250 } 00:21:07.250 ]' 00:21:07.250 04:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.508 04:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.508 04:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.508 04:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:07.509 04:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.509 04:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.509 04:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.509 04:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.766 04:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWVlZTRhOTIzNGVmYWI3MjBkMjYwZDg3OThjOWM0MjM4MmZiYjcwZDkzYmVhMzBmFlxh5w==: --dhchap-ctrl-secret DHHC-1:01:Mjc5YzUwZDNiMDY3YThjOWU5YTc5NWE3ZWYxN2EyMDnH6hJa: 00:21:08.700 04:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.700 04:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.700 04:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.700 04:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.700 04:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.700 04:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.700 04:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:08.700 04:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:21:08.958 04:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:08.958 04:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.958 04:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:08.958 04:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:08.958 04:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:08.958 04:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.958 04:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:08.958 04:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.958 04:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.958 04:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.958 04:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:08.958 04:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.217 00:21:09.217 04:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.217 04:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.217 04:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.475 04:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.475 04:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.475 04:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.475 04:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.475 04:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.475 04:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.475 { 00:21:09.475 "cntlid": 103, 00:21:09.475 "qid": 0, 00:21:09.475 "state": "enabled", 00:21:09.475 "listen_address": { 00:21:09.475 "trtype": "TCP", 00:21:09.475 "adrfam": "IPv4", 00:21:09.475 "traddr": "10.0.0.2", 00:21:09.475 "trsvcid": "4420" 00:21:09.475 }, 00:21:09.475 "peer_address": { 00:21:09.475 "trtype": "TCP", 00:21:09.475 "adrfam": "IPv4", 00:21:09.475 "traddr": "10.0.0.1", 00:21:09.475 "trsvcid": "45736" 00:21:09.475 }, 00:21:09.475 "auth": { 00:21:09.475 "state": "completed", 00:21:09.475 "digest": "sha512", 00:21:09.475 "dhgroup": "null" 00:21:09.475 } 00:21:09.475 } 00:21:09.475 ]' 00:21:09.475 04:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.475 04:37:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.475 04:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.475 04:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:09.475 04:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.732 04:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.732 04:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.732 04:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.732 04:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjhkNmI3ZjM5MDBmM2NmN2FjNzVkYzE3YmI5N2ZiNDFlMjkxNjFkNjkzODZkYjQ5ZWIwMTkyNjYwOTlhYzhlY0cATyg=: 00:21:11.110 04:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.110 04:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.110 04:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.110 04:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.110 04:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.110 04:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.110 04:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.110 04:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:11.110 04:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:11.110 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:11.110 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.110 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.110 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:11.110 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:11.110 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.110 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.110 04:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.110 04:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.110 04:37:31 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.110 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.110 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.395 00:21:11.395 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.395 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.395 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.652 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.652 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.652 04:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.652 04:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.652 04:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.652 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.652 { 00:21:11.652 "cntlid": 105, 00:21:11.652 "qid": 0, 00:21:11.652 "state": "enabled", 00:21:11.652 "listen_address": { 00:21:11.652 "trtype": "TCP", 00:21:11.652 "adrfam": "IPv4", 00:21:11.652 "traddr": "10.0.0.2", 00:21:11.652 "trsvcid": "4420" 00:21:11.652 }, 00:21:11.652 "peer_address": { 00:21:11.652 "trtype": "TCP", 00:21:11.652 "adrfam": "IPv4", 00:21:11.652 "traddr": "10.0.0.1", 00:21:11.652 "trsvcid": "45752" 00:21:11.652 }, 00:21:11.652 "auth": { 00:21:11.652 "state": "completed", 00:21:11.652 "digest": "sha512", 00:21:11.652 "dhgroup": "ffdhe2048" 00:21:11.652 } 00:21:11.652 } 00:21:11.652 ]' 00:21:11.652 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.653 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.653 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.653 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:11.653 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.911 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.911 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.911 04:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.170 04:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:M2NhMTlhOThiZWY5ZGZmZGRmODBhZmI2NDdjNGUwYjhiZmZhYmUyZWEyY2M1NWRjls8hcg==: --dhchap-ctrl-secret DHHC-1:03:ODdmZThmZGMzOGY0NDJiYTdkNjJmYjBlN2JmNGZhZDYwZWE3YmE0ZjNkYWZlMThlNTJjMjk4NWEyZjExNmU1NTSSwhg=: 00:21:13.114 04:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.114 04:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.114 04:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.114 04:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.114 04:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.114 04:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.115 04:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.115 04:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.376 04:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:13.376 04:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.376 04:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.376 04:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:13.376 04:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:13.376 04:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.376 04:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.376 04:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.376 04:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.376 04:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.377 04:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.377 04:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.634 00:21:13.634 04:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.634 04:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.634 04:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.892 04:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.892 04:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.892 04:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.892 04:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.892 04:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.892 04:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.892 { 00:21:13.892 "cntlid": 107, 00:21:13.892 "qid": 0, 00:21:13.892 "state": "enabled", 00:21:13.892 "listen_address": { 00:21:13.892 "trtype": "TCP", 00:21:13.892 "adrfam": "IPv4", 00:21:13.892 "traddr": "10.0.0.2", 00:21:13.892 "trsvcid": "4420" 00:21:13.892 }, 00:21:13.892 "peer_address": { 00:21:13.892 "trtype": "TCP", 00:21:13.892 "adrfam": "IPv4", 00:21:13.892 "traddr": "10.0.0.1", 00:21:13.892 "trsvcid": "45786" 00:21:13.892 }, 00:21:13.892 "auth": { 00:21:13.892 "state": "completed", 00:21:13.892 "digest": "sha512", 00:21:13.892 "dhgroup": "ffdhe2048" 00:21:13.892 } 00:21:13.892 } 00:21:13.892 ]' 00:21:13.892 04:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.892 04:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.892 04:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.149 04:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:14.149 04:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:14.149 04:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.149 04:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.149 04:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.407 04:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDM3MThiYmRjY2YzODllNzViY2I4NzMzMjRlZThkMTYBysTu: --dhchap-ctrl-secret DHHC-1:02:YjUxYWEzNDAyMzgxY2Y2NTI2YzNjNWJlZDY3NTUyYzg1NTI2ZWI0YzRlNmZhN2Qy7dMWbQ==: 00:21:15.337 04:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.337 04:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.337 04:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.337 04:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.337 04:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.337 04:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.337 04:37:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:15.337 04:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:15.594 04:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:15.594 04:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.594 04:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:15.594 04:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:15.594 04:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:15.594 04:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.594 04:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.594 04:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.594 04:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.594 04:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.594 04:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.594 04:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.852 00:21:15.852 04:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.852 04:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.852 04:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.109 04:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.109 04:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.109 04:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.109 04:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.109 04:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.109 04:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.109 { 00:21:16.109 "cntlid": 109, 00:21:16.109 "qid": 0, 00:21:16.109 "state": "enabled", 00:21:16.109 "listen_address": { 00:21:16.109 "trtype": "TCP", 00:21:16.109 "adrfam": "IPv4", 00:21:16.109 "traddr": "10.0.0.2", 00:21:16.109 "trsvcid": "4420" 00:21:16.109 }, 00:21:16.109 "peer_address": { 00:21:16.109 "trtype": "TCP", 00:21:16.109 
"adrfam": "IPv4", 00:21:16.109 "traddr": "10.0.0.1", 00:21:16.109 "trsvcid": "45812" 00:21:16.109 }, 00:21:16.110 "auth": { 00:21:16.110 "state": "completed", 00:21:16.110 "digest": "sha512", 00:21:16.110 "dhgroup": "ffdhe2048" 00:21:16.110 } 00:21:16.110 } 00:21:16.110 ]' 00:21:16.110 04:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.366 04:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.366 04:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.366 04:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:16.366 04:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.366 04:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.366 04:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.366 04:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.623 04:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWVlZTRhOTIzNGVmYWI3MjBkMjYwZDg3OThjOWM0MjM4MmZiYjcwZDkzYmVhMzBmFlxh5w==: --dhchap-ctrl-secret DHHC-1:01:Mjc5YzUwZDNiMDY3YThjOWU5YTc5NWE3ZWYxN2EyMDnH6hJa: 00:21:17.560 04:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.560 04:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.560 04:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.560 04:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.560 04:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.560 04:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.560 04:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.560 04:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.818 04:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:17.818 04:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.818 04:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:17.818 04:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:17.818 04:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:17.818 04:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.818 04:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:17.818 04:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.818 04:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.818 04:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.818 04:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.818 04:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:18.386 00:21:18.386 04:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.386 04:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.386 04:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.644 04:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.645 04:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.645 04:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.645 04:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.645 04:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.645 04:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.645 { 00:21:18.645 "cntlid": 111, 00:21:18.645 "qid": 0, 00:21:18.645 "state": "enabled", 00:21:18.645 "listen_address": { 00:21:18.645 "trtype": "TCP", 00:21:18.645 "adrfam": "IPv4", 00:21:18.645 "traddr": "10.0.0.2", 00:21:18.645 "trsvcid": "4420" 00:21:18.645 }, 00:21:18.645 "peer_address": { 00:21:18.645 "trtype": "TCP", 00:21:18.645 "adrfam": "IPv4", 00:21:18.645 "traddr": "10.0.0.1", 00:21:18.645 "trsvcid": "52430" 00:21:18.645 }, 00:21:18.645 "auth": { 00:21:18.645 "state": "completed", 00:21:18.645 "digest": "sha512", 00:21:18.645 "dhgroup": "ffdhe2048" 00:21:18.645 } 00:21:18.645 } 00:21:18.645 ]' 00:21:18.645 04:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.645 04:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.645 04:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.645 04:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:18.645 04:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.645 04:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.645 04:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.645 04:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.903 04:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjhkNmI3ZjM5MDBmM2NmN2FjNzVkYzE3YmI5N2ZiNDFlMjkxNjFkNjkzODZkYjQ5ZWIwMTkyNjYwOTlhYzhlY0cATyg=: 00:21:19.841 04:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.841 04:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.841 04:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.841 04:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.841 04:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.841 04:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.841 04:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.841 04:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:19.841 04:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:20.100 04:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:20.100 04:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.100 04:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:20.100 04:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:20.100 04:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:20.100 04:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.100 04:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.100 04:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.100 04:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.100 04:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.100 04:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.100 04:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
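The block above completes one connect_authenticate pass; the same cycle repeats for every digest/dhgroup/key index in the remainder of this trace. A minimal sketch of a single pass, reconstructed only from the commands already traced above — HOST_NQN, HOST_ID and the DHCHAP_* variables are placeholders for the uuid host NQN and DHHC-1 secrets used in this run, and rpc.py is assumed to be on PATH rather than invoked by its full workspace path:

    # initiator (host.sock): restrict the allowed digest and DH group for this pass
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    # target (default socket): allow the host with DH-HMAC-CHAP key0/ckey0
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # initiator: attach the controller, which performs the DH-HMAC-CHAP exchange
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOST_NQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # target: confirm the qpair finished authentication with the expected parameters
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # "completed"
    # tear down, re-check with the kernel initiator, then remove the host entry
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOST_NQN" --hostid "$HOST_ID" \
        --dhchap-secret "$DHCHAP_KEY0" --dhchap-ctrl-secret "$DHCHAP_CKEY0"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN"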
00:21:20.667 00:21:20.667 04:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.667 04:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.667 04:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.924 04:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.925 04:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.925 04:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.925 04:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.925 04:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.925 04:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.925 { 00:21:20.925 "cntlid": 113, 00:21:20.925 "qid": 0, 00:21:20.925 "state": "enabled", 00:21:20.925 "listen_address": { 00:21:20.925 "trtype": "TCP", 00:21:20.925 "adrfam": "IPv4", 00:21:20.925 "traddr": "10.0.0.2", 00:21:20.925 "trsvcid": "4420" 00:21:20.925 }, 00:21:20.925 "peer_address": { 00:21:20.925 "trtype": "TCP", 00:21:20.925 "adrfam": "IPv4", 00:21:20.925 "traddr": "10.0.0.1", 00:21:20.925 "trsvcid": "52454" 00:21:20.925 }, 00:21:20.925 "auth": { 00:21:20.925 "state": "completed", 00:21:20.925 "digest": "sha512", 00:21:20.925 "dhgroup": "ffdhe3072" 00:21:20.925 } 00:21:20.925 } 00:21:20.925 ]' 00:21:20.925 04:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.925 04:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.925 04:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.925 04:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:20.925 04:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.925 04:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.925 04:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.925 04:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.183 04:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:M2NhMTlhOThiZWY5ZGZmZGRmODBhZmI2NDdjNGUwYjhiZmZhYmUyZWEyY2M1NWRjls8hcg==: --dhchap-ctrl-secret DHHC-1:03:ODdmZThmZGMzOGY0NDJiYTdkNjJmYjBlN2JmNGZhZDYwZWE3YmE0ZjNkYWZlMThlNTJjMjk4NWEyZjExNmU1NTSSwhg=: 00:21:22.118 04:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.118 04:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.118 04:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:21:22.118 04:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.118 04:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.118 04:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.118 04:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:22.118 04:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:22.377 04:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:22.377 04:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.377 04:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:22.377 04:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:22.377 04:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:22.377 04:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.377 04:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.377 04:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.377 04:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.377 04:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.377 04:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.377 04:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.636 00:21:22.636 04:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.636 04:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.636 04:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.894 04:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.894 04:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.894 04:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.894 04:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.894 04:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.894 04:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.894 { 00:21:22.894 
"cntlid": 115, 00:21:22.894 "qid": 0, 00:21:22.894 "state": "enabled", 00:21:22.894 "listen_address": { 00:21:22.894 "trtype": "TCP", 00:21:22.894 "adrfam": "IPv4", 00:21:22.894 "traddr": "10.0.0.2", 00:21:22.894 "trsvcid": "4420" 00:21:22.894 }, 00:21:22.894 "peer_address": { 00:21:22.894 "trtype": "TCP", 00:21:22.894 "adrfam": "IPv4", 00:21:22.894 "traddr": "10.0.0.1", 00:21:22.894 "trsvcid": "52484" 00:21:22.894 }, 00:21:22.894 "auth": { 00:21:22.894 "state": "completed", 00:21:22.894 "digest": "sha512", 00:21:22.894 "dhgroup": "ffdhe3072" 00:21:22.894 } 00:21:22.894 } 00:21:22.894 ]' 00:21:22.894 04:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.160 04:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.160 04:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:23.160 04:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:23.160 04:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.160 04:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.160 04:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.160 04:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.418 04:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDM3MThiYmRjY2YzODllNzViY2I4NzMzMjRlZThkMTYBysTu: --dhchap-ctrl-secret DHHC-1:02:YjUxYWEzNDAyMzgxY2Y2NTI2YzNjNWJlZDY3NTUyYzg1NTI2ZWI0YzRlNmZhN2Qy7dMWbQ==: 00:21:24.356 04:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.356 04:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.356 04:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.356 04:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.356 04:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.356 04:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:24.356 04:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.356 04:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.614 04:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:24.614 04:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.614 04:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:24.614 04:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:21:24.614 04:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:24.614 04:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.614 04:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.614 04:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.614 04:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.614 04:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.614 04:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.614 04:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.226 00:21:25.226 04:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:25.226 04:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.226 04:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:25.226 04:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.226 04:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.226 04:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.226 04:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.226 04:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.226 04:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:25.226 { 00:21:25.226 "cntlid": 117, 00:21:25.226 "qid": 0, 00:21:25.226 "state": "enabled", 00:21:25.226 "listen_address": { 00:21:25.226 "trtype": "TCP", 00:21:25.226 "adrfam": "IPv4", 00:21:25.226 "traddr": "10.0.0.2", 00:21:25.226 "trsvcid": "4420" 00:21:25.226 }, 00:21:25.226 "peer_address": { 00:21:25.226 "trtype": "TCP", 00:21:25.226 "adrfam": "IPv4", 00:21:25.226 "traddr": "10.0.0.1", 00:21:25.226 "trsvcid": "52524" 00:21:25.226 }, 00:21:25.226 "auth": { 00:21:25.226 "state": "completed", 00:21:25.226 "digest": "sha512", 00:21:25.226 "dhgroup": "ffdhe3072" 00:21:25.226 } 00:21:25.226 } 00:21:25.226 ]' 00:21:25.226 04:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:25.485 04:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.485 04:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:25.485 04:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:25.485 04:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:21:25.485 04:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.485 04:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.485 04:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.742 04:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWVlZTRhOTIzNGVmYWI3MjBkMjYwZDg3OThjOWM0MjM4MmZiYjcwZDkzYmVhMzBmFlxh5w==: --dhchap-ctrl-secret DHHC-1:01:Mjc5YzUwZDNiMDY3YThjOWU5YTc5NWE3ZWYxN2EyMDnH6hJa: 00:21:26.678 04:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.678 04:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.678 04:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.678 04:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.678 04:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.678 04:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.678 04:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.678 04:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.936 04:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:26.936 04:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.936 04:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:26.936 04:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:26.936 04:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:26.936 04:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.936 04:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:26.936 04:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.936 04:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.936 04:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.936 04:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:26.936 04:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:27.195 00:21:27.195 04:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.195 04:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.195 04:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.762 04:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.762 04:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.762 04:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.762 04:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.762 04:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.762 04:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.762 { 00:21:27.762 "cntlid": 119, 00:21:27.762 "qid": 0, 00:21:27.762 "state": "enabled", 00:21:27.762 "listen_address": { 00:21:27.762 "trtype": "TCP", 00:21:27.762 "adrfam": "IPv4", 00:21:27.762 "traddr": "10.0.0.2", 00:21:27.762 "trsvcid": "4420" 00:21:27.762 }, 00:21:27.762 "peer_address": { 00:21:27.762 "trtype": "TCP", 00:21:27.762 "adrfam": "IPv4", 00:21:27.762 "traddr": "10.0.0.1", 00:21:27.762 "trsvcid": "52548" 00:21:27.762 }, 00:21:27.762 "auth": { 00:21:27.762 "state": "completed", 00:21:27.762 "digest": "sha512", 00:21:27.762 "dhgroup": "ffdhe3072" 00:21:27.762 } 00:21:27.762 } 00:21:27.762 ]' 00:21:27.762 04:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.762 04:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.762 04:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.762 04:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:27.762 04:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.762 04:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.762 04:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.762 04:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.021 04:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjhkNmI3ZjM5MDBmM2NmN2FjNzVkYzE3YmI5N2ZiNDFlMjkxNjFkNjkzODZkYjQ5ZWIwMTkyNjYwOTlhYzhlY0cATyg=: 00:21:28.958 04:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.958 04:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.958 04:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.958 04:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.958 04:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.958 04:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.958 04:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.958 04:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.958 04:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:29.216 04:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:29.216 04:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:29.216 04:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:29.216 04:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:29.216 04:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:29.216 04:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.216 04:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.216 04:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.216 04:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.216 04:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.216 04:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.216 04:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.782 00:21:29.782 04:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.783 04:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.783 04:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.783 04:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.783 04:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.783 04:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.783 04:37:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.783 04:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.783 04:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.783 { 00:21:29.783 "cntlid": 121, 00:21:29.783 "qid": 0, 00:21:29.783 "state": "enabled", 00:21:29.783 "listen_address": { 00:21:29.783 "trtype": "TCP", 00:21:29.783 "adrfam": "IPv4", 00:21:29.783 "traddr": "10.0.0.2", 00:21:29.783 "trsvcid": "4420" 00:21:29.783 }, 00:21:29.783 "peer_address": { 00:21:29.783 "trtype": "TCP", 00:21:29.783 "adrfam": "IPv4", 00:21:29.783 "traddr": "10.0.0.1", 00:21:29.783 "trsvcid": "34902" 00:21:29.783 }, 00:21:29.783 "auth": { 00:21:29.783 "state": "completed", 00:21:29.783 "digest": "sha512", 00:21:29.783 "dhgroup": "ffdhe4096" 00:21:29.783 } 00:21:29.783 } 00:21:29.783 ]' 00:21:29.783 04:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:30.039 04:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.039 04:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:30.039 04:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:30.039 04:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:30.039 04:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.039 04:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.039 04:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.296 04:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:M2NhMTlhOThiZWY5ZGZmZGRmODBhZmI2NDdjNGUwYjhiZmZhYmUyZWEyY2M1NWRjls8hcg==: --dhchap-ctrl-secret DHHC-1:03:ODdmZThmZGMzOGY0NDJiYTdkNjJmYjBlN2JmNGZhZDYwZWE3YmE0ZjNkYWZlMThlNTJjMjk4NWEyZjExNmU1NTSSwhg=: 00:21:31.234 04:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.234 04:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.234 04:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.234 04:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.234 04:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.234 04:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:31.234 04:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:31.234 04:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:31.490 04:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:21:31.490 04:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.490 04:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.490 04:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:31.490 04:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:31.490 04:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.490 04:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.490 04:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.490 04:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.490 04:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.490 04:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.490 04:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.747 00:21:31.747 04:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.747 04:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.747 04:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.006 04:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.006 04:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.006 04:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.006 04:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.263 04:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.263 04:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.263 { 00:21:32.263 "cntlid": 123, 00:21:32.263 "qid": 0, 00:21:32.263 "state": "enabled", 00:21:32.263 "listen_address": { 00:21:32.263 "trtype": "TCP", 00:21:32.263 "adrfam": "IPv4", 00:21:32.263 "traddr": "10.0.0.2", 00:21:32.263 "trsvcid": "4420" 00:21:32.263 }, 00:21:32.263 "peer_address": { 00:21:32.263 "trtype": "TCP", 00:21:32.263 "adrfam": "IPv4", 00:21:32.263 "traddr": "10.0.0.1", 00:21:32.263 "trsvcid": "34938" 00:21:32.263 }, 00:21:32.263 "auth": { 00:21:32.263 "state": "completed", 00:21:32.263 "digest": "sha512", 00:21:32.263 "dhgroup": "ffdhe4096" 00:21:32.263 } 00:21:32.263 } 00:21:32.263 ]' 00:21:32.263 04:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.263 04:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.263 04:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.264 04:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:32.264 04:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.264 04:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.264 04:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.264 04:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.520 04:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDM3MThiYmRjY2YzODllNzViY2I4NzMzMjRlZThkMTYBysTu: --dhchap-ctrl-secret DHHC-1:02:YjUxYWEzNDAyMzgxY2Y2NTI2YzNjNWJlZDY3NTUyYzg1NTI2ZWI0YzRlNmZhN2Qy7dMWbQ==: 00:21:33.457 04:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.457 04:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.457 04:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.457 04:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.457 04:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.457 04:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.457 04:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.457 04:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.714 04:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:33.714 04:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.714 04:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.714 04:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:33.714 04:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:33.714 04:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.714 04:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.714 04:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.714 04:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.714 04:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.714 
04:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.714 04:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.281 00:21:34.281 04:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.281 04:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.281 04:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.539 04:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.539 04:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.539 04:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.539 04:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.539 04:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.539 04:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.539 { 00:21:34.539 "cntlid": 125, 00:21:34.539 "qid": 0, 00:21:34.539 "state": "enabled", 00:21:34.539 "listen_address": { 00:21:34.539 "trtype": "TCP", 00:21:34.539 "adrfam": "IPv4", 00:21:34.539 "traddr": "10.0.0.2", 00:21:34.539 "trsvcid": "4420" 00:21:34.539 }, 00:21:34.539 "peer_address": { 00:21:34.539 "trtype": "TCP", 00:21:34.539 "adrfam": "IPv4", 00:21:34.539 "traddr": "10.0.0.1", 00:21:34.539 "trsvcid": "34962" 00:21:34.539 }, 00:21:34.539 "auth": { 00:21:34.539 "state": "completed", 00:21:34.539 "digest": "sha512", 00:21:34.539 "dhgroup": "ffdhe4096" 00:21:34.539 } 00:21:34.539 } 00:21:34.539 ]' 00:21:34.539 04:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.539 04:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.539 04:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.539 04:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:34.539 04:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.539 04:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.539 04:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.539 04:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.797 04:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:02:MWVlZTRhOTIzNGVmYWI3MjBkMjYwZDg3OThjOWM0MjM4MmZiYjcwZDkzYmVhMzBmFlxh5w==: --dhchap-ctrl-secret DHHC-1:01:Mjc5YzUwZDNiMDY3YThjOWU5YTc5NWE3ZWYxN2EyMDnH6hJa: 00:21:35.732 04:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.732 04:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.732 04:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.732 04:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.732 04:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.732 04:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.732 04:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.732 04:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.989 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:35.989 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.989 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:35.989 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:35.989 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:35.989 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.989 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:35.989 04:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.989 04:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.989 04:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.989 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:35.989 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.552 00:21:36.552 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.552 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.552 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.809 04:37:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.809 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.809 04:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.809 04:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.809 04:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.809 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.809 { 00:21:36.809 "cntlid": 127, 00:21:36.809 "qid": 0, 00:21:36.809 "state": "enabled", 00:21:36.809 "listen_address": { 00:21:36.809 "trtype": "TCP", 00:21:36.809 "adrfam": "IPv4", 00:21:36.809 "traddr": "10.0.0.2", 00:21:36.809 "trsvcid": "4420" 00:21:36.809 }, 00:21:36.809 "peer_address": { 00:21:36.809 "trtype": "TCP", 00:21:36.809 "adrfam": "IPv4", 00:21:36.809 "traddr": "10.0.0.1", 00:21:36.809 "trsvcid": "35002" 00:21:36.809 }, 00:21:36.809 "auth": { 00:21:36.809 "state": "completed", 00:21:36.809 "digest": "sha512", 00:21:36.809 "dhgroup": "ffdhe4096" 00:21:36.809 } 00:21:36.809 } 00:21:36.809 ]' 00:21:36.809 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.809 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.809 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.809 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:36.809 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.809 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.809 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.809 04:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.069 04:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjhkNmI3ZjM5MDBmM2NmN2FjNzVkYzE3YmI5N2ZiNDFlMjkxNjFkNjkzODZkYjQ5ZWIwMTkyNjYwOTlhYzhlY0cATyg=: 00:21:38.007 04:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.007 04:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.007 04:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.007 04:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.007 04:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.007 04:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:38.007 04:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.007 04:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
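Each pass that follows (sha512 with ffdhe6144) ends with the same three assertions against the single qpair returned by nvmf_subsystem_get_qpairs; a condensed sketch of those checks, assuming the qpair JSON has the shape shown in the earlier dumps and rpc.py is on PATH:

    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]   # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]   # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # authentication finished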
00:21:38.007 04:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:38.574 04:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:38.574 04:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.574 04:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:38.574 04:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:38.574 04:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:38.574 04:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.574 04:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.574 04:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.574 04:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.574 04:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.574 04:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.574 04:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.180 00:21:39.180 04:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.180 04:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.180 04:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.180 04:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.180 04:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.180 04:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.180 04:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.439 04:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.439 04:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.439 { 00:21:39.439 "cntlid": 129, 00:21:39.439 "qid": 0, 00:21:39.439 "state": "enabled", 00:21:39.439 "listen_address": { 00:21:39.439 "trtype": "TCP", 00:21:39.439 "adrfam": "IPv4", 00:21:39.439 "traddr": "10.0.0.2", 00:21:39.439 "trsvcid": "4420" 00:21:39.439 }, 00:21:39.439 "peer_address": { 00:21:39.439 "trtype": "TCP", 00:21:39.439 "adrfam": "IPv4", 00:21:39.439 "traddr": "10.0.0.1", 00:21:39.439 "trsvcid": "52572" 00:21:39.439 }, 00:21:39.439 "auth": { 
00:21:39.439 "state": "completed", 00:21:39.439 "digest": "sha512", 00:21:39.439 "dhgroup": "ffdhe6144" 00:21:39.439 } 00:21:39.439 } 00:21:39.439 ]' 00:21:39.439 04:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.439 04:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.439 04:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:39.439 04:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:39.439 04:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.439 04:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.439 04:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.439 04:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.697 04:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:M2NhMTlhOThiZWY5ZGZmZGRmODBhZmI2NDdjNGUwYjhiZmZhYmUyZWEyY2M1NWRjls8hcg==: --dhchap-ctrl-secret DHHC-1:03:ODdmZThmZGMzOGY0NDJiYTdkNjJmYjBlN2JmNGZhZDYwZWE3YmE0ZjNkYWZlMThlNTJjMjk4NWEyZjExNmU1NTSSwhg=: 00:21:40.632 04:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.632 04:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.632 04:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.632 04:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.632 04:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.632 04:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:40.632 04:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.632 04:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.890 04:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:40.890 04:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.890 04:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:40.890 04:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:40.890 04:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:40.890 04:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.890 04:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.890 04:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.890 04:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.890 04:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.890 04:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.890 04:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.458 00:21:41.458 04:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.458 04:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.458 04:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.717 04:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.717 04:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.717 04:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.717 04:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.717 04:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.717 04:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:41.717 { 00:21:41.717 "cntlid": 131, 00:21:41.717 "qid": 0, 00:21:41.717 "state": "enabled", 00:21:41.717 "listen_address": { 00:21:41.717 "trtype": "TCP", 00:21:41.717 "adrfam": "IPv4", 00:21:41.717 "traddr": "10.0.0.2", 00:21:41.717 "trsvcid": "4420" 00:21:41.717 }, 00:21:41.717 "peer_address": { 00:21:41.717 "trtype": "TCP", 00:21:41.717 "adrfam": "IPv4", 00:21:41.717 "traddr": "10.0.0.1", 00:21:41.717 "trsvcid": "52604" 00:21:41.717 }, 00:21:41.717 "auth": { 00:21:41.717 "state": "completed", 00:21:41.717 "digest": "sha512", 00:21:41.717 "dhgroup": "ffdhe6144" 00:21:41.717 } 00:21:41.717 } 00:21:41.717 ]' 00:21:41.717 04:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.717 04:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.717 04:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.717 04:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:41.717 04:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.976 04:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.976 04:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.976 04:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.234 04:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDM3MThiYmRjY2YzODllNzViY2I4NzMzMjRlZThkMTYBysTu: --dhchap-ctrl-secret DHHC-1:02:YjUxYWEzNDAyMzgxY2Y2NTI2YzNjNWJlZDY3NTUyYzg1NTI2ZWI0YzRlNmZhN2Qy7dMWbQ==: 00:21:43.174 04:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.174 04:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.174 04:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.174 04:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.174 04:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.174 04:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.174 04:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:43.174 04:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:43.438 04:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:43.438 04:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:43.438 04:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:43.438 04:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:43.438 04:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:43.438 04:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.438 04:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.438 04:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.438 04:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.438 04:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.438 04:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.438 04:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:44.006 00:21:44.006 04:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:44.006 04:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:44.006 04:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.263 04:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.263 04:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.263 04:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.263 04:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.263 04:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.263 04:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.263 { 00:21:44.263 "cntlid": 133, 00:21:44.263 "qid": 0, 00:21:44.263 "state": "enabled", 00:21:44.263 "listen_address": { 00:21:44.263 "trtype": "TCP", 00:21:44.263 "adrfam": "IPv4", 00:21:44.263 "traddr": "10.0.0.2", 00:21:44.263 "trsvcid": "4420" 00:21:44.263 }, 00:21:44.263 "peer_address": { 00:21:44.263 "trtype": "TCP", 00:21:44.263 "adrfam": "IPv4", 00:21:44.263 "traddr": "10.0.0.1", 00:21:44.263 "trsvcid": "52626" 00:21:44.263 }, 00:21:44.263 "auth": { 00:21:44.263 "state": "completed", 00:21:44.263 "digest": "sha512", 00:21:44.263 "dhgroup": "ffdhe6144" 00:21:44.263 } 00:21:44.263 } 00:21:44.263 ]' 00:21:44.263 04:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.263 04:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.263 04:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:44.263 04:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:44.264 04:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.264 04:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.264 04:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.264 04:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.523 04:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWVlZTRhOTIzNGVmYWI3MjBkMjYwZDg3OThjOWM0MjM4MmZiYjcwZDkzYmVhMzBmFlxh5w==: --dhchap-ctrl-secret DHHC-1:01:Mjc5YzUwZDNiMDY3YThjOWU5YTc5NWE3ZWYxN2EyMDnH6hJa: 00:21:45.460 04:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.460 04:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.460 04:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.460 04:38:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.718 04:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.718 04:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:45.718 04:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:45.718 04:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:45.977 04:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:45.977 04:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.977 04:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:45.977 04:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:45.977 04:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:45.977 04:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.977 04:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:45.977 04:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.977 04:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.977 04:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.977 04:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:45.977 04:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.546 00:21:46.546 04:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.546 04:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.546 04:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.804 04:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.804 04:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.804 04:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.804 04:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.804 04:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.804 04:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.804 { 00:21:46.804 "cntlid": 135, 00:21:46.804 "qid": 0, 00:21:46.804 "state": "enabled", 00:21:46.804 "listen_address": { 
00:21:46.804 "trtype": "TCP", 00:21:46.804 "adrfam": "IPv4", 00:21:46.804 "traddr": "10.0.0.2", 00:21:46.804 "trsvcid": "4420" 00:21:46.804 }, 00:21:46.804 "peer_address": { 00:21:46.804 "trtype": "TCP", 00:21:46.804 "adrfam": "IPv4", 00:21:46.804 "traddr": "10.0.0.1", 00:21:46.804 "trsvcid": "52640" 00:21:46.804 }, 00:21:46.804 "auth": { 00:21:46.804 "state": "completed", 00:21:46.804 "digest": "sha512", 00:21:46.804 "dhgroup": "ffdhe6144" 00:21:46.804 } 00:21:46.804 } 00:21:46.804 ]' 00:21:46.804 04:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.804 04:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.804 04:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.804 04:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:46.804 04:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.804 04:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.804 04:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.804 04:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.062 04:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjhkNmI3ZjM5MDBmM2NmN2FjNzVkYzE3YmI5N2ZiNDFlMjkxNjFkNjkzODZkYjQ5ZWIwMTkyNjYwOTlhYzhlY0cATyg=: 00:21:48.000 04:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.000 04:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.000 04:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.000 04:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.000 04:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.000 04:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:48.000 04:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.001 04:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.001 04:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.260 04:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:48.260 04:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.260 04:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:48.260 04:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:48.260 04:38:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:21:48.260 04:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.260 04:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.260 04:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.260 04:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.260 04:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.260 04:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.260 04:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.196 00:21:49.196 04:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.196 04:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.196 04:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.460 04:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.460 04:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.460 04:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.460 04:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.460 04:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.460 04:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.460 { 00:21:49.460 "cntlid": 137, 00:21:49.460 "qid": 0, 00:21:49.460 "state": "enabled", 00:21:49.460 "listen_address": { 00:21:49.460 "trtype": "TCP", 00:21:49.460 "adrfam": "IPv4", 00:21:49.460 "traddr": "10.0.0.2", 00:21:49.460 "trsvcid": "4420" 00:21:49.460 }, 00:21:49.460 "peer_address": { 00:21:49.460 "trtype": "TCP", 00:21:49.460 "adrfam": "IPv4", 00:21:49.460 "traddr": "10.0.0.1", 00:21:49.460 "trsvcid": "42240" 00:21:49.460 }, 00:21:49.460 "auth": { 00:21:49.460 "state": "completed", 00:21:49.460 "digest": "sha512", 00:21:49.460 "dhgroup": "ffdhe8192" 00:21:49.460 } 00:21:49.460 } 00:21:49.460 ]' 00:21:49.460 04:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.460 04:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.460 04:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.717 04:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:49.717 04:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.717 04:38:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.717 04:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.717 04:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.974 04:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:M2NhMTlhOThiZWY5ZGZmZGRmODBhZmI2NDdjNGUwYjhiZmZhYmUyZWEyY2M1NWRjls8hcg==: --dhchap-ctrl-secret DHHC-1:03:ODdmZThmZGMzOGY0NDJiYTdkNjJmYjBlN2JmNGZhZDYwZWE3YmE0ZjNkYWZlMThlNTJjMjk4NWEyZjExNmU1NTSSwhg=: 00:21:50.906 04:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.906 04:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.906 04:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.906 04:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.906 04:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.906 04:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:50.907 04:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:50.907 04:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.164 04:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:51.164 04:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.164 04:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:51.164 04:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:51.164 04:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:51.164 04:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.164 04:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.164 04:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.164 04:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.164 04:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.164 04:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.164 04:38:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.099 00:21:52.099 04:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.099 04:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.099 04:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.358 04:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.358 04:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.358 04:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.358 04:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.358 04:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.358 04:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.358 { 00:21:52.358 "cntlid": 139, 00:21:52.358 "qid": 0, 00:21:52.358 "state": "enabled", 00:21:52.358 "listen_address": { 00:21:52.358 "trtype": "TCP", 00:21:52.358 "adrfam": "IPv4", 00:21:52.358 "traddr": "10.0.0.2", 00:21:52.358 "trsvcid": "4420" 00:21:52.358 }, 00:21:52.358 "peer_address": { 00:21:52.358 "trtype": "TCP", 00:21:52.358 "adrfam": "IPv4", 00:21:52.358 "traddr": "10.0.0.1", 00:21:52.358 "trsvcid": "42266" 00:21:52.358 }, 00:21:52.358 "auth": { 00:21:52.358 "state": "completed", 00:21:52.358 "digest": "sha512", 00:21:52.358 "dhgroup": "ffdhe8192" 00:21:52.358 } 00:21:52.358 } 00:21:52.358 ]' 00:21:52.358 04:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.358 04:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.358 04:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.358 04:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:52.358 04:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.358 04:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.358 04:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.358 04:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.932 04:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZDM3MThiYmRjY2YzODllNzViY2I4NzMzMjRlZThkMTYBysTu: --dhchap-ctrl-secret DHHC-1:02:YjUxYWEzNDAyMzgxY2Y2NTI2YzNjNWJlZDY3NTUyYzg1NTI2ZWI0YzRlNmZhN2Qy7dMWbQ==: 00:21:53.928 04:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:21:53.928 04:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.928 04:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.928 04:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.928 04:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.928 04:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:53.928 04:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:53.928 04:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:53.928 04:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:53.928 04:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.928 04:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:53.928 04:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:53.928 04:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:53.928 04:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.928 04:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.928 04:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.928 04:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.928 04:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.928 04:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.928 04:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.865 00:21:54.865 04:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:54.865 04:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.865 04:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.123 04:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.123 04:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.123 04:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:55.123 04:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.123 04:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.123 04:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.123 { 00:21:55.123 "cntlid": 141, 00:21:55.123 "qid": 0, 00:21:55.123 "state": "enabled", 00:21:55.123 "listen_address": { 00:21:55.123 "trtype": "TCP", 00:21:55.123 "adrfam": "IPv4", 00:21:55.123 "traddr": "10.0.0.2", 00:21:55.123 "trsvcid": "4420" 00:21:55.123 }, 00:21:55.123 "peer_address": { 00:21:55.123 "trtype": "TCP", 00:21:55.123 "adrfam": "IPv4", 00:21:55.123 "traddr": "10.0.0.1", 00:21:55.123 "trsvcid": "42286" 00:21:55.123 }, 00:21:55.123 "auth": { 00:21:55.123 "state": "completed", 00:21:55.123 "digest": "sha512", 00:21:55.123 "dhgroup": "ffdhe8192" 00:21:55.123 } 00:21:55.123 } 00:21:55.123 ]' 00:21:55.123 04:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.123 04:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.123 04:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.123 04:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:55.123 04:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.381 04:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.382 04:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.382 04:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.382 04:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MWVlZTRhOTIzNGVmYWI3MjBkMjYwZDg3OThjOWM0MjM4MmZiYjcwZDkzYmVhMzBmFlxh5w==: --dhchap-ctrl-secret DHHC-1:01:Mjc5YzUwZDNiMDY3YThjOWU5YTc5NWE3ZWYxN2EyMDnH6hJa: 00:21:56.758 04:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.758 04:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.758 04:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.758 04:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.758 04:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.758 04:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.758 04:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:56.758 04:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:56.758 04:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:21:56.758 04:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.758 04:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:56.758 04:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:56.758 04:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:56.758 04:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.758 04:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:56.758 04:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.758 04:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.758 04:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.758 04:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.758 04:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:57.724 00:21:57.724 04:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:57.724 04:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.724 04:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:57.980 04:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.980 04:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.980 04:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.981 04:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.981 04:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.981 04:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.981 { 00:21:57.981 "cntlid": 143, 00:21:57.981 "qid": 0, 00:21:57.981 "state": "enabled", 00:21:57.981 "listen_address": { 00:21:57.981 "trtype": "TCP", 00:21:57.981 "adrfam": "IPv4", 00:21:57.981 "traddr": "10.0.0.2", 00:21:57.981 "trsvcid": "4420" 00:21:57.981 }, 00:21:57.981 "peer_address": { 00:21:57.981 "trtype": "TCP", 00:21:57.981 "adrfam": "IPv4", 00:21:57.981 "traddr": "10.0.0.1", 00:21:57.981 "trsvcid": "42320" 00:21:57.981 }, 00:21:57.981 "auth": { 00:21:57.981 "state": "completed", 00:21:57.981 "digest": "sha512", 00:21:57.981 "dhgroup": "ffdhe8192" 00:21:57.981 } 00:21:57.981 } 00:21:57.981 ]' 00:21:57.981 04:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:57.981 04:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.981 04:38:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.981 04:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:57.981 04:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.981 04:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.981 04:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.981 04:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.239 04:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjhkNmI3ZjM5MDBmM2NmN2FjNzVkYzE3YmI5N2ZiNDFlMjkxNjFkNjkzODZkYjQ5ZWIwMTkyNjYwOTlhYzhlY0cATyg=: 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
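The iteration above is one pass of the connect_authenticate helper used throughout this test: the host bdev_nvme layer is opened up to every digest (sha256, sha384, sha512) and DH group (null through ffdhe8192), the host NQN is added to the subsystem with a key pair so the DH-HMAC-CHAP exchange is bidirectional, a controller is attached over TCP and its qpair is checked for auth state "completed", and the kernel initiator then repeats the handshake with nvme-cli before everything is torn down again. A minimal shell sketch of that sequence, assuming the named keys key0/ckey0 and the corresponding DHHC-1 secrets were generated and registered earlier in the run (not shown in this excerpt):

# Sketch only: NQNs, addresses and key names are taken from this run; the key
# material itself is elided and assumed to be registered already.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: accept every digest and DH group for the DH-HMAC-CHAP negotiation.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

# Target side: allow the host on the subsystem; --dhchap-ctrlr-key enables
# controller (bidirectional) authentication as well.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller over TCP with the same key pair, confirm the qpair
# finished authentication, then detach.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Kernel initiator: repeat the handshake with nvme-cli using the raw secrets,
# then disconnect and remove the host before the next key/dhgroup combination.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret "DHHC-1:00:<elided>" --dhchap-ctrl-secret "DHHC-1:03:<elided>"
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"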
00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.611 04:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.549 00:22:00.549 04:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:00.549 04:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.549 04:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:00.807 04:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.807 04:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.807 04:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.807 04:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.807 04:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.807 04:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:00.807 { 00:22:00.807 "cntlid": 145, 00:22:00.807 "qid": 0, 00:22:00.807 "state": "enabled", 00:22:00.807 "listen_address": { 00:22:00.807 "trtype": "TCP", 00:22:00.807 "adrfam": "IPv4", 00:22:00.807 "traddr": "10.0.0.2", 00:22:00.807 "trsvcid": "4420" 00:22:00.807 }, 00:22:00.807 "peer_address": { 00:22:00.807 "trtype": "TCP", 00:22:00.807 "adrfam": "IPv4", 00:22:00.807 "traddr": "10.0.0.1", 00:22:00.807 "trsvcid": "43940" 00:22:00.807 }, 00:22:00.807 "auth": { 00:22:00.807 "state": "completed", 00:22:00.807 "digest": "sha512", 00:22:00.807 "dhgroup": "ffdhe8192" 00:22:00.807 } 00:22:00.807 } 00:22:00.807 ]' 00:22:00.807 04:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:00.807 04:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.807 04:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:00.807 04:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:00.808 04:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:00.808 04:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.808 04:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.808 04:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.065 
04:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:M2NhMTlhOThiZWY5ZGZmZGRmODBhZmI2NDdjNGUwYjhiZmZhYmUyZWEyY2M1NWRjls8hcg==: --dhchap-ctrl-secret DHHC-1:03:ODdmZThmZGMzOGY0NDJiYTdkNjJmYjBlN2JmNGZhZDYwZWE3YmE0ZjNkYWZlMThlNTJjMjk4NWEyZjExNmU1NTSSwhg=: 00:22:02.002 04:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.002 04:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.002 04:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.002 04:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.002 04:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.002 04:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:02.002 04:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.002 04:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.002 04:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.002 04:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:02.002 04:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:02.002 04:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:02.002 04:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:02.002 04:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.002 04:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:02.002 04:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.002 04:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:02.002 04:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:02.937 request: 00:22:02.937 { 00:22:02.937 "name": "nvme0", 00:22:02.937 "trtype": "tcp", 00:22:02.937 "traddr": 
"10.0.0.2", 00:22:02.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:02.937 "adrfam": "ipv4", 00:22:02.937 "trsvcid": "4420", 00:22:02.937 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:02.937 "dhchap_key": "key2", 00:22:02.937 "method": "bdev_nvme_attach_controller", 00:22:02.937 "req_id": 1 00:22:02.937 } 00:22:02.937 Got JSON-RPC error response 00:22:02.937 response: 00:22:02.937 { 00:22:02.937 "code": -5, 00:22:02.937 "message": "Input/output error" 00:22:02.937 } 00:22:02.937 04:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:02.937 04:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:02.937 04:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:02.937 04:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:02.937 04:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.937 04:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.937 04:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.937 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.937 04:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.937 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.937 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.937 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.937 04:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:02.937 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:02.937 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:02.937 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:02.937 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.937 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:02.937 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.937 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:02.937 04:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:03.883 request: 00:22:03.883 { 00:22:03.883 "name": "nvme0", 00:22:03.883 "trtype": "tcp", 00:22:03.883 "traddr": "10.0.0.2", 00:22:03.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:03.883 "adrfam": "ipv4", 00:22:03.883 "trsvcid": "4420", 00:22:03.883 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:03.883 "dhchap_key": "key1", 00:22:03.883 "dhchap_ctrlr_key": "ckey2", 00:22:03.883 "method": "bdev_nvme_attach_controller", 00:22:03.883 "req_id": 1 00:22:03.883 } 00:22:03.883 Got JSON-RPC error response 00:22:03.883 response: 00:22:03.883 { 00:22:03.883 "code": -5, 00:22:03.883 "message": "Input/output error" 00:22:03.883 } 00:22:03.883 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:03.883 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:03.883 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:03.883 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:03.883 04:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.883 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.883 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.883 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.883 04:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:03.883 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.883 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.883 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.883 04:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.883 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:03.883 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.883 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:03.883 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:03.883 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:03.883 04:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:03.883 04:38:23 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.883 04:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.820 request: 00:22:04.820 { 00:22:04.820 "name": "nvme0", 00:22:04.820 "trtype": "tcp", 00:22:04.820 "traddr": "10.0.0.2", 00:22:04.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:04.820 "adrfam": "ipv4", 00:22:04.820 "trsvcid": "4420", 00:22:04.820 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:04.820 "dhchap_key": "key1", 00:22:04.820 "dhchap_ctrlr_key": "ckey1", 00:22:04.820 "method": "bdev_nvme_attach_controller", 00:22:04.820 "req_id": 1 00:22:04.820 } 00:22:04.820 Got JSON-RPC error response 00:22:04.820 response: 00:22:04.820 { 00:22:04.820 "code": -5, 00:22:04.820 "message": "Input/output error" 00:22:04.820 } 00:22:04.820 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:04.820 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:04.820 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:04.820 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:04.820 04:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:04.820 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.820 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.820 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.820 04:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2799477 00:22:04.820 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 2799477 ']' 00:22:04.821 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 2799477 00:22:04.821 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:04.821 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:04.821 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2799477 00:22:04.821 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:04.821 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:04.821 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2799477' 00:22:04.821 killing process with pid 2799477 00:22:04.821 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 2799477 00:22:04.821 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 2799477 00:22:04.821 04:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:04.821 04:38:24 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:04.821 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:04.821 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.821 04:38:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2822104 00:22:04.821 04:38:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:04.821 04:38:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2822104 00:22:04.821 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 2822104 ']' 00:22:04.821 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.821 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:04.821 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.821 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:04.821 04:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.079 04:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:05.079 04:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:05.079 04:38:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:05.079 04:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:05.079 04:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.079 04:38:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.079 04:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:05.079 04:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2822104 00:22:05.079 04:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 2822104 ']' 00:22:05.079 04:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.079 04:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:05.079 04:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:05.079 04:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:05.079 04:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.339 04:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:05.339 04:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:05.339 04:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:05.339 04:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.339 04:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.597 04:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.597 04:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:05.597 04:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.597 04:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:05.597 04:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:05.598 04:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:05.598 04:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.598 04:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:05.598 04:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.598 04:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.598 04:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.598 04:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.598 04:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.561 00:22:06.561 04:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:06.561 04:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:06.561 04:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.561 04:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.561 04:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.561 04:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.561 04:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.819 04:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.819 04:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:06.819 { 00:22:06.819 
"cntlid": 1, 00:22:06.819 "qid": 0, 00:22:06.819 "state": "enabled", 00:22:06.819 "listen_address": { 00:22:06.819 "trtype": "TCP", 00:22:06.819 "adrfam": "IPv4", 00:22:06.819 "traddr": "10.0.0.2", 00:22:06.819 "trsvcid": "4420" 00:22:06.819 }, 00:22:06.819 "peer_address": { 00:22:06.819 "trtype": "TCP", 00:22:06.819 "adrfam": "IPv4", 00:22:06.819 "traddr": "10.0.0.1", 00:22:06.819 "trsvcid": "43986" 00:22:06.819 }, 00:22:06.819 "auth": { 00:22:06.819 "state": "completed", 00:22:06.819 "digest": "sha512", 00:22:06.819 "dhgroup": "ffdhe8192" 00:22:06.819 } 00:22:06.819 } 00:22:06.819 ]' 00:22:06.819 04:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.819 04:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.819 04:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:06.819 04:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:06.819 04:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.819 04:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.819 04:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.819 04:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.077 04:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MjhkNmI3ZjM5MDBmM2NmN2FjNzVkYzE3YmI5N2ZiNDFlMjkxNjFkNjkzODZkYjQ5ZWIwMTkyNjYwOTlhYzhlY0cATyg=: 00:22:08.014 04:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.014 04:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.014 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.014 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.014 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.014 04:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:08.014 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.014 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.014 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.014 04:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:08.014 04:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:08.304 04:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.304 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:08.304 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.304 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:08.304 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.304 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:08.304 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.304 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.304 04:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.562 request: 00:22:08.562 { 00:22:08.562 "name": "nvme0", 00:22:08.562 "trtype": "tcp", 00:22:08.562 "traddr": "10.0.0.2", 00:22:08.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:08.562 "adrfam": "ipv4", 00:22:08.562 "trsvcid": "4420", 00:22:08.562 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:08.562 "dhchap_key": "key3", 00:22:08.562 "method": "bdev_nvme_attach_controller", 00:22:08.562 "req_id": 1 00:22:08.562 } 00:22:08.562 Got JSON-RPC error response 00:22:08.562 response: 00:22:08.562 { 00:22:08.562 "code": -5, 00:22:08.562 "message": "Input/output error" 00:22:08.562 } 00:22:08.562 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:08.562 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:08.562 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:08.562 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:08.562 04:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:08.562 04:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:08.562 04:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:08.562 04:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:08.820 04:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.820 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:08.820 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.820 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:08.820 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.820 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:08.820 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.820 04:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.820 04:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:09.078 request: 00:22:09.078 { 00:22:09.078 "name": "nvme0", 00:22:09.078 "trtype": "tcp", 00:22:09.078 "traddr": "10.0.0.2", 00:22:09.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:09.078 "adrfam": "ipv4", 00:22:09.078 "trsvcid": "4420", 00:22:09.078 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:09.078 "dhchap_key": "key3", 00:22:09.078 "method": "bdev_nvme_attach_controller", 00:22:09.078 "req_id": 1 00:22:09.078 } 00:22:09.078 Got JSON-RPC error response 00:22:09.078 response: 00:22:09.078 { 00:22:09.078 "code": -5, 00:22:09.078 "message": "Input/output error" 00:22:09.078 } 00:22:09.078 04:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:09.078 04:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:09.078 04:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:09.078 04:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:09.078 04:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:09.078 04:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:09.078 04:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:09.078 04:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:09.078 04:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:09.078 04:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:09.335 04:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.335 04:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.335 04:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.335 04:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.336 04:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.336 04:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.336 04:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.336 04:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.336 04:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:09.336 04:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:09.336 04:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:09.336 04:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:09.336 04:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:09.336 04:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:09.336 04:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:09.336 04:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:09.336 04:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:09.594 request: 00:22:09.594 { 00:22:09.594 "name": "nvme0", 00:22:09.594 "trtype": "tcp", 00:22:09.594 "traddr": "10.0.0.2", 00:22:09.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:09.594 "adrfam": "ipv4", 00:22:09.594 "trsvcid": "4420", 00:22:09.594 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:09.594 "dhchap_key": "key0", 00:22:09.594 "dhchap_ctrlr_key": "key1", 00:22:09.594 "method": "bdev_nvme_attach_controller", 00:22:09.594 "req_id": 1 00:22:09.594 } 00:22:09.594 Got JSON-RPC error response 00:22:09.594 response: 00:22:09.594 { 00:22:09.594 "code": -5, 00:22:09.594 "message": "Input/output error" 00:22:09.594 } 00:22:09.594 04:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:09.594 04:38:29 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:09.594 04:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:09.594 04:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:09.594 04:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:09.594 04:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:09.852 00:22:09.852 04:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:09.852 04:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:09.852 04:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.109 04:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.109 04:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.109 04:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.368 04:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:10.368 04:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:10.368 04:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2799560 00:22:10.368 04:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 2799560 ']' 00:22:10.368 04:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 2799560 00:22:10.368 04:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:10.368 04:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:10.368 04:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2799560 00:22:10.368 04:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:10.368 04:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:10.368 04:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2799560' 00:22:10.368 killing process with pid 2799560 00:22:10.368 04:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 2799560 00:22:10.368 04:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 2799560 00:22:10.944 04:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:10.944 04:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:10.944 04:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:10.944 04:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:10.944 04:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 
00:22:10.944 04:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:10.944 04:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:10.944 rmmod nvme_tcp 00:22:10.944 rmmod nvme_fabrics 00:22:10.944 rmmod nvme_keyring 00:22:10.944 04:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:10.944 04:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:10.944 04:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:10.944 04:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2822104 ']' 00:22:10.944 04:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2822104 00:22:10.944 04:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 2822104 ']' 00:22:10.944 04:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 2822104 00:22:10.944 04:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:10.944 04:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:10.944 04:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2822104 00:22:10.944 04:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:10.944 04:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:10.944 04:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2822104' 00:22:10.944 killing process with pid 2822104 00:22:10.944 04:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 2822104 00:22:10.944 04:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 2822104 00:22:11.202 04:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:11.202 04:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:11.202 04:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:11.202 04:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:11.202 04:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:11.202 04:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.202 04:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.202 04:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.733 04:38:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:13.733 04:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.buT /tmp/spdk.key-sha256.vfl /tmp/spdk.key-sha384.zyq /tmp/spdk.key-sha512.MU8 /tmp/spdk.key-sha512.xbL /tmp/spdk.key-sha384.m5y /tmp/spdk.key-sha256.Kyw '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:13.733 00:22:13.733 real 3m9.644s 00:22:13.733 user 7m21.476s 00:22:13.733 sys 0m25.184s 00:22:13.733 04:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:13.733 04:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.733 ************************************ 00:22:13.733 END TEST 
nvmf_auth_target 00:22:13.733 ************************************ 00:22:13.733 04:38:33 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:13.733 04:38:33 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:13.733 04:38:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:22:13.733 04:38:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:13.733 04:38:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:13.733 ************************************ 00:22:13.733 START TEST nvmf_bdevio_no_huge 00:22:13.733 ************************************ 00:22:13.733 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:13.733 * Looking for test storage... 00:22:13.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:13.733 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:13.733 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- 
# MALLOC_BLOCK_SIZE=512 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:13.734 04:38:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:15.109 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:15.109 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:15.109 04:38:35 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:15.109 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:15.109 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:15.109 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.368 
04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:15.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:15.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:22:15.368 00:22:15.368 --- 10.0.0.2 ping statistics --- 00:22:15.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.368 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:15.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:22:15.368 00:22:15.368 --- 10.0.0.1 ping statistics --- 00:22:15.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.368 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2824869 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2824869 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 2824869 ']' 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:15.368 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.368 [2024-07-14 04:38:35.463555] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:22:15.368 [2024-07-14 04:38:35.463636] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:15.368 [2024-07-14 04:38:35.532239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:15.626 [2024-07-14 04:38:35.626256] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.626 [2024-07-14 04:38:35.626326] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.626 [2024-07-14 04:38:35.626344] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.626 [2024-07-14 04:38:35.626357] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.626 [2024-07-14 04:38:35.626369] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:15.626 [2024-07-14 04:38:35.626472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:15.626 [2024-07-14 04:38:35.626528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:15.626 [2024-07-14 04:38:35.626583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:15.626 [2024-07-14 04:38:35.626586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:15.626 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:15.626 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:22:15.626 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:15.626 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:15.626 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.626 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.627 [2024-07-14 04:38:35.751161] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 
00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.627 Malloc0 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.627 [2024-07-14 04:38:35.789296] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:15.627 { 00:22:15.627 "params": { 00:22:15.627 "name": "Nvme$subsystem", 00:22:15.627 "trtype": "$TEST_TRANSPORT", 00:22:15.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.627 "adrfam": "ipv4", 00:22:15.627 "trsvcid": "$NVMF_PORT", 00:22:15.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.627 "hdgst": ${hdgst:-false}, 00:22:15.627 "ddgst": ${ddgst:-false} 00:22:15.627 }, 00:22:15.627 "method": "bdev_nvme_attach_controller" 00:22:15.627 } 00:22:15.627 EOF 00:22:15.627 )") 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:15.627 04:38:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:15.627 "params": { 00:22:15.627 "name": "Nvme1", 00:22:15.627 "trtype": "tcp", 00:22:15.627 "traddr": "10.0.0.2", 00:22:15.627 "adrfam": "ipv4", 00:22:15.627 "trsvcid": "4420", 00:22:15.627 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.627 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:15.627 "hdgst": false, 00:22:15.627 "ddgst": false 00:22:15.627 }, 00:22:15.627 "method": "bdev_nvme_attach_controller" 00:22:15.627 }' 00:22:15.887 [2024-07-14 04:38:35.834610] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:22:15.887 [2024-07-14 04:38:35.834689] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2824902 ] 00:22:15.887 [2024-07-14 04:38:35.895614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:15.887 [2024-07-14 04:38:35.983141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.887 [2024-07-14 04:38:35.983193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.887 [2024-07-14 04:38:35.983196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.147 I/O targets: 00:22:16.147 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:16.147 00:22:16.147 00:22:16.147 CUnit - A unit testing framework for C - Version 2.1-3 00:22:16.147 http://cunit.sourceforge.net/ 00:22:16.147 00:22:16.147 00:22:16.147 Suite: bdevio tests on: Nvme1n1 00:22:16.147 Test: blockdev write read block ...passed 00:22:16.147 Test: blockdev write zeroes read block ...passed 00:22:16.147 Test: blockdev write zeroes read no split ...passed 00:22:16.406 Test: blockdev write zeroes read split ...passed 00:22:16.406 Test: blockdev write zeroes read split partial ...passed 00:22:16.406 Test: blockdev reset ...[2024-07-14 04:38:36.436359] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:16.406 [2024-07-14 04:38:36.436472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195c2a0 (9): Bad file descriptor 00:22:16.406 [2024-07-14 04:38:36.453903] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:16.406 passed 00:22:16.406 Test: blockdev write read 8 blocks ...passed 00:22:16.406 Test: blockdev write read size > 128k ...passed 00:22:16.406 Test: blockdev write read invalid size ...passed 00:22:16.406 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:16.406 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:16.406 Test: blockdev write read max offset ...passed 00:22:16.664 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:16.664 Test: blockdev writev readv 8 blocks ...passed 00:22:16.664 Test: blockdev writev readv 30 x 1block ...passed 00:22:16.664 Test: blockdev writev readv block ...passed 00:22:16.664 Test: blockdev writev readv size > 128k ...passed 00:22:16.664 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:16.664 Test: blockdev comparev and writev ...[2024-07-14 04:38:36.671797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.664 [2024-07-14 04:38:36.671834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.664 [2024-07-14 04:38:36.671880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.664 [2024-07-14 04:38:36.671911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.664 [2024-07-14 04:38:36.672337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.664 [2024-07-14 04:38:36.672365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:16.664 [2024-07-14 04:38:36.672400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.664 [2024-07-14 04:38:36.672428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:16.664 [2024-07-14 04:38:36.672846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.664 [2024-07-14 04:38:36.672881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:16.664 [2024-07-14 04:38:36.672918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.664 [2024-07-14 04:38:36.672946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:16.664 [2024-07-14 04:38:36.673370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.664 [2024-07-14 04:38:36.673397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:16.664 [2024-07-14 04:38:36.673432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.664 [2024-07-14 04:38:36.673459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:16.664 passed 00:22:16.664 Test: blockdev nvme passthru rw ...passed 00:22:16.664 Test: blockdev nvme passthru vendor specific ...[2024-07-14 04:38:36.756256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:16.664 [2024-07-14 04:38:36.756285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:16.664 [2024-07-14 04:38:36.756514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:16.664 [2024-07-14 04:38:36.756541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:16.664 [2024-07-14 04:38:36.756770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:16.664 [2024-07-14 04:38:36.756796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:16.664 [2024-07-14 04:38:36.757035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:16.664 [2024-07-14 04:38:36.757067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:16.664 passed 00:22:16.664 Test: blockdev nvme admin passthru ...passed 00:22:16.664 Test: blockdev copy ...passed 00:22:16.664 00:22:16.664 Run Summary: Type Total Ran Passed Failed Inactive 00:22:16.664 suites 1 1 n/a 0 0 00:22:16.664 tests 23 23 23 0 0 00:22:16.664 asserts 152 152 152 0 n/a 00:22:16.664 00:22:16.664 Elapsed time = 1.186 seconds 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:17.232 rmmod nvme_tcp 00:22:17.232 rmmod nvme_fabrics 00:22:17.232 rmmod nvme_keyring 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2824869 ']' 00:22:17.232 04:38:37 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2824869 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 2824869 ']' 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 2824869 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2824869 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2824869' 00:22:17.232 killing process with pid 2824869 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 2824869 00:22:17.232 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 2824869 00:22:17.491 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:17.491 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:17.491 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:17.491 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:17.491 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:17.491 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.491 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:17.491 04:38:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.028 04:38:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:20.028 00:22:20.028 real 0m6.263s 00:22:20.028 user 0m10.335s 00:22:20.028 sys 0m2.374s 00:22:20.028 04:38:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:20.028 04:38:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:20.028 ************************************ 00:22:20.028 END TEST nvmf_bdevio_no_huge 00:22:20.028 ************************************ 00:22:20.028 04:38:39 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:20.028 04:38:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:20.028 04:38:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:20.028 04:38:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:20.028 ************************************ 00:22:20.028 START TEST nvmf_tls 00:22:20.028 ************************************ 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:20.028 * Looking for test storage... 
00:22:20.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:20.028 04:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:21.403 
04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:21.403 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:21.403 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:21.403 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:21.403 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:21.404 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.404 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:21.404 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:21.404 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.404 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:21.404 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:21.404 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:21.404 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:21.404 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:21.404 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.404 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.404 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.404 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:21.404 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.404 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.404 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:21.404 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.404 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.404 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:21.404 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:21.404 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.404 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:21.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:22:21.662 00:22:21.662 --- 10.0.0.2 ping statistics --- 00:22:21.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.662 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:21.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:22:21.662 00:22:21.662 --- 10.0.0.1 ping statistics --- 00:22:21.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.662 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2826964 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2826964 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2826964 ']' 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:21.662 04:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.662 [2024-07-14 04:38:41.791876] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:22:21.662 [2024-07-14 04:38:41.791968] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.662 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.921 [2024-07-14 04:38:41.862522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.921 [2024-07-14 04:38:41.952212] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.921 [2024-07-14 04:38:41.952281] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:21.921 [2024-07-14 04:38:41.952298] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.921 [2024-07-14 04:38:41.952311] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.921 [2024-07-14 04:38:41.952323] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:21.921 [2024-07-14 04:38:41.952352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.921 04:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:21.921 04:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:21.921 04:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:21.921 04:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:21.921 04:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.921 04:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.921 04:38:42 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:21.921 04:38:42 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:22.179 true 00:22:22.179 04:38:42 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:22.179 04:38:42 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:22.437 04:38:42 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:22.437 04:38:42 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:22.437 04:38:42 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:22.696 04:38:42 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:22.696 04:38:42 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:22.956 04:38:42 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:22.956 04:38:42 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:22.956 04:38:42 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:23.217 04:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:23.217 04:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:23.515 04:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:23.515 04:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:23.515 04:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:23.515 04:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:23.774 04:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:23.774 04:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:23.774 04:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:24.032 04:38:43 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:24.032 04:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:24.291 04:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:24.291 04:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:24.291 04:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:24.550 04:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:24.550 04:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.eB4v01BZ3u 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.e0NSleXl0I 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.eB4v01BZ3u 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.e0NSleXl0I 00:22:24.809 04:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:25.069 04:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:25.328 04:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.eB4v01BZ3u 00:22:25.328 04:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.eB4v01BZ3u 00:22:25.328 04:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:25.587 [2024-07-14 04:38:45.705027] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.587 04:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:25.844 04:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:26.103 [2024-07-14 04:38:46.178318] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:26.103 [2024-07-14 04:38:46.178539] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.103 04:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:26.363 malloc0 00:22:26.363 04:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:26.622 04:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eB4v01BZ3u 00:22:26.882 [2024-07-14 04:38:46.920810] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:26.882 04:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.eB4v01BZ3u 00:22:26.882 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.864 Initializing NVMe Controllers 00:22:36.864 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:36.864 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:36.864 Initialization complete. Launching workers. 
00:22:36.864 ======================================================== 00:22:36.864 Latency(us) 00:22:36.864 Device Information : IOPS MiB/s Average min max 00:22:36.864 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7501.36 29.30 8534.67 1259.28 9671.68 00:22:36.864 ======================================================== 00:22:36.864 Total : 7501.36 29.30 8534.67 1259.28 9671.68 00:22:36.864 00:22:36.865 04:38:57 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eB4v01BZ3u 00:22:36.865 04:38:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:36.865 04:38:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:36.865 04:38:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:36.865 04:38:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eB4v01BZ3u' 00:22:36.865 04:38:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:36.865 04:38:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2828777 00:22:36.865 04:38:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:36.865 04:38:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:36.865 04:38:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2828777 /var/tmp/bdevperf.sock 00:22:36.865 04:38:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2828777 ']' 00:22:36.865 04:38:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.865 04:38:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:36.865 04:38:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:36.865 04:38:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:36.865 04:38:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.122 [2024-07-14 04:38:57.080672] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:37.122 [2024-07-14 04:38:57.080758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828777 ] 00:22:37.122 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.122 [2024-07-14 04:38:57.140121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.122 [2024-07-14 04:38:57.226831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.378 04:38:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:37.378 04:38:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:37.378 04:38:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eB4v01BZ3u 00:22:37.637 [2024-07-14 04:38:57.609993] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:37.637 [2024-07-14 04:38:57.610114] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:37.637 TLSTESTn1 00:22:37.637 04:38:57 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:37.637 Running I/O for 10 seconds... 00:22:49.891 00:22:49.891 Latency(us) 00:22:49.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.891 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:49.891 Verification LBA range: start 0x0 length 0x2000 00:22:49.891 TLSTESTn1 : 10.06 1605.84 6.27 0.00 0.00 79475.87 9077.95 114178.28 00:22:49.891 =================================================================================================================== 00:22:49.891 Total : 1605.84 6.27 0.00 0.00 79475.87 9077.95 114178.28 00:22:49.891 0 00:22:49.891 04:39:07 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:49.891 04:39:07 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2828777 00:22:49.891 04:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2828777 ']' 00:22:49.891 04:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2828777 00:22:49.891 04:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:49.891 04:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:49.891 04:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2828777 00:22:49.891 04:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:49.891 04:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:49.891 04:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2828777' 00:22:49.891 killing process with pid 2828777 00:22:49.891 04:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2828777 00:22:49.891 Received shutdown signal, test time was about 10.000000 seconds 00:22:49.891 00:22:49.891 Latency(us) 00:22:49.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:49.891 =================================================================================================================== 00:22:49.891 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:49.891 [2024-07-14 04:39:07.944597] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:49.891 04:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2828777 00:22:49.891 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.e0NSleXl0I 00:22:49.891 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:49.891 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.e0NSleXl0I 00:22:49.891 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:49.891 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.891 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:49.891 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.891 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.e0NSleXl0I 00:22:49.891 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:49.891 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:49.891 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.e0NSleXl0I' 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2830451 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2830451 /var/tmp/bdevperf.sock 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2830451 ']' 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.892 [2024-07-14 04:39:08.199504] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:49.892 [2024-07-14 04:39:08.199591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830451 ] 00:22:49.892 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.892 [2024-07-14 04:39:08.259001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.892 [2024-07-14 04:39:08.350067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.e0NSleXl0I 00:22:49.892 [2024-07-14 04:39:08.696699] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:49.892 [2024-07-14 04:39:08.696814] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:49.892 [2024-07-14 04:39:08.703506] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:49.892 [2024-07-14 04:39:08.704566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bf840 (107): Transport endpoint is not connected 00:22:49.892 [2024-07-14 04:39:08.705554] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bf840 (9): Bad file descriptor 00:22:49.892 [2024-07-14 04:39:08.706554] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:49.892 [2024-07-14 04:39:08.706578] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:49.892 [2024-07-14 04:39:08.706604] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:49.892 request: 00:22:49.892 { 00:22:49.892 "name": "TLSTEST", 00:22:49.892 "trtype": "tcp", 00:22:49.892 "traddr": "10.0.0.2", 00:22:49.892 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:49.892 "adrfam": "ipv4", 00:22:49.892 "trsvcid": "4420", 00:22:49.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.892 "psk": "/tmp/tmp.e0NSleXl0I", 00:22:49.892 "method": "bdev_nvme_attach_controller", 00:22:49.892 "req_id": 1 00:22:49.892 } 00:22:49.892 Got JSON-RPC error response 00:22:49.892 response: 00:22:49.892 { 00:22:49.892 "code": -5, 00:22:49.892 "message": "Input/output error" 00:22:49.892 } 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2830451 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2830451 ']' 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2830451 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2830451 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2830451' 00:22:49.892 killing process with pid 2830451 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2830451 00:22:49.892 Received shutdown signal, test time was about 10.000000 seconds 00:22:49.892 00:22:49.892 Latency(us) 00:22:49.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.892 =================================================================================================================== 00:22:49.892 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:49.892 [2024-07-14 04:39:08.756708] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2830451 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.eB4v01BZ3u 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.eB4v01BZ3u 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.eB4v01BZ3u 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eB4v01BZ3u' 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2830756 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2830756 /var/tmp/bdevperf.sock 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2830756 ']' 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:49.892 04:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.892 [2024-07-14 04:39:09.004822] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:49.892 [2024-07-14 04:39:09.004931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830756 ] 00:22:49.892 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.892 [2024-07-14 04:39:09.065745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.892 [2024-07-14 04:39:09.154046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.892 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:49.892 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:49.892 04:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.eB4v01BZ3u 00:22:49.892 [2024-07-14 04:39:09.489129] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:49.892 [2024-07-14 04:39:09.489280] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:49.892 [2024-07-14 04:39:09.495506] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:49.892 [2024-07-14 04:39:09.495537] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:49.892 [2024-07-14 04:39:09.495581] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:49.893 [2024-07-14 04:39:09.496303] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73d840 (107): Transport endpoint is not connected 00:22:49.893 [2024-07-14 04:39:09.497287] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73d840 (9): Bad file descriptor 00:22:49.893 [2024-07-14 04:39:09.498286] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:49.893 [2024-07-14 04:39:09.498310] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:49.893 [2024-07-14 04:39:09.498337] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
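The errors above, and the JSON-RPC response that follows, are the intended outcome of this negative case: no PSK was registered on the target for host2, so the lookup for the identity string printed by tcp_sock_get_key ("NVMe0R01 <host NQN> <subsystem NQN>") fails and the connection is dropped before the admin queue comes up. Registering the missing pairing would use the same RPC the passing cases rely on; a hypothetical sketch, which the test deliberately omits, reusing the PSK file bdevperf was just given:

    # Hypothetical: register host2 against cnode1 on the target side.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.eB4v01BZ3u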
00:22:49.893 request: 00:22:49.893 { 00:22:49.893 "name": "TLSTEST", 00:22:49.893 "trtype": "tcp", 00:22:49.893 "traddr": "10.0.0.2", 00:22:49.893 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:49.893 "adrfam": "ipv4", 00:22:49.893 "trsvcid": "4420", 00:22:49.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.893 "psk": "/tmp/tmp.eB4v01BZ3u", 00:22:49.893 "method": "bdev_nvme_attach_controller", 00:22:49.893 "req_id": 1 00:22:49.893 } 00:22:49.893 Got JSON-RPC error response 00:22:49.893 response: 00:22:49.893 { 00:22:49.893 "code": -5, 00:22:49.893 "message": "Input/output error" 00:22:49.893 } 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2830756 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2830756 ']' 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2830756 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2830756 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2830756' 00:22:49.893 killing process with pid 2830756 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2830756 00:22:49.893 Received shutdown signal, test time was about 10.000000 seconds 00:22:49.893 00:22:49.893 Latency(us) 00:22:49.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.893 =================================================================================================================== 00:22:49.893 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:49.893 [2024-07-14 04:39:09.545916] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2830756 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.eB4v01BZ3u 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.eB4v01BZ3u 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.eB4v01BZ3u 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eB4v01BZ3u' 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2830940 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2830940 /var/tmp/bdevperf.sock 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2830940 ']' 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:49.893 04:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.893 [2024-07-14 04:39:09.779616] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:49.893 [2024-07-14 04:39:09.779695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830940 ] 00:22:49.893 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.893 [2024-07-14 04:39:09.837298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.893 [2024-07-14 04:39:09.922380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.893 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:49.893 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:49.893 04:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eB4v01BZ3u 00:22:50.153 [2024-07-14 04:39:10.251064] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:50.153 [2024-07-14 04:39:10.251200] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:50.153 [2024-07-14 04:39:10.257455] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:50.153 [2024-07-14 04:39:10.257486] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:50.153 [2024-07-14 04:39:10.257524] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:50.153 [2024-07-14 04:39:10.258248] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd9840 (107): Transport endpoint is not connected 00:22:50.153 [2024-07-14 04:39:10.259221] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd9840 (9): Bad file descriptor 00:22:50.153 [2024-07-14 04:39:10.260221] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:50.153 [2024-07-14 04:39:10.260248] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:50.153 [2024-07-14 04:39:10.260277] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:50.153 request: 00:22:50.153 { 00:22:50.153 "name": "TLSTEST", 00:22:50.153 "trtype": "tcp", 00:22:50.153 "traddr": "10.0.0.2", 00:22:50.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.153 "adrfam": "ipv4", 00:22:50.153 "trsvcid": "4420", 00:22:50.153 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:50.153 "psk": "/tmp/tmp.eB4v01BZ3u", 00:22:50.153 "method": "bdev_nvme_attach_controller", 00:22:50.153 "req_id": 1 00:22:50.153 } 00:22:50.153 Got JSON-RPC error response 00:22:50.153 response: 00:22:50.153 { 00:22:50.153 "code": -5, 00:22:50.153 "message": "Input/output error" 00:22:50.153 } 00:22:50.153 04:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2830940 00:22:50.153 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2830940 ']' 00:22:50.153 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2830940 00:22:50.153 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:50.153 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:50.153 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2830940 00:22:50.153 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:50.153 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:50.153 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2830940' 00:22:50.153 killing process with pid 2830940 00:22:50.153 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2830940 00:22:50.153 Received shutdown signal, test time was about 10.000000 seconds 00:22:50.153 00:22:50.153 Latency(us) 00:22:50.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.153 =================================================================================================================== 00:22:50.153 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:50.153 [2024-07-14 04:39:10.304144] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:50.153 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2830940 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2830958 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2830958 /var/tmp/bdevperf.sock 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2830958 ']' 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:50.413 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.413 [2024-07-14 04:39:10.561945] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:50.413 [2024-07-14 04:39:10.562032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830958 ] 00:22:50.413 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.671 [2024-07-14 04:39:10.624473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.671 [2024-07-14 04:39:10.713049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.671 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:50.671 04:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:50.671 04:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:50.930 [2024-07-14 04:39:11.049565] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:50.930 [2024-07-14 04:39:11.051804] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e0f10 (9): Bad file descriptor 00:22:50.930 [2024-07-14 04:39:11.052800] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:50.930 [2024-07-14 04:39:11.052824] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:50.930 [2024-07-14 04:39:11.052864] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
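This third negative case drops the key entirely: psk= is left empty, so bdev_nvme_attach_controller is issued without --psk against a listener the script configures with TLS enabled (the -k flag in the setup_nvmf_tgt calls later in the log), and the connection fails with the same I/O error shown below. The only difference from the working invocation is the missing option:

    # Exercised here, expected to fail the TLS setup (no --psk):
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # Working form used for the successful TLSTESTn1 run further down (PSK file supplied):
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.N0E2ujdMon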
00:22:50.930 request: 00:22:50.930 { 00:22:50.931 "name": "TLSTEST", 00:22:50.931 "trtype": "tcp", 00:22:50.931 "traddr": "10.0.0.2", 00:22:50.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.931 "adrfam": "ipv4", 00:22:50.931 "trsvcid": "4420", 00:22:50.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.931 "method": "bdev_nvme_attach_controller", 00:22:50.931 "req_id": 1 00:22:50.931 } 00:22:50.931 Got JSON-RPC error response 00:22:50.931 response: 00:22:50.931 { 00:22:50.931 "code": -5, 00:22:50.931 "message": "Input/output error" 00:22:50.931 } 00:22:50.931 04:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2830958 00:22:50.931 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2830958 ']' 00:22:50.931 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2830958 00:22:50.931 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:50.931 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:50.931 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2830958 00:22:50.931 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:50.931 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:50.931 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2830958' 00:22:50.931 killing process with pid 2830958 00:22:50.931 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2830958 00:22:50.931 Received shutdown signal, test time was about 10.000000 seconds 00:22:50.931 00:22:50.931 Latency(us) 00:22:50.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.931 =================================================================================================================== 00:22:50.931 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:50.931 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2830958 00:22:51.191 04:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:51.191 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:51.191 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:51.191 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:51.191 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:51.191 04:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2826964 00:22:51.191 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2826964 ']' 00:22:51.191 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2826964 00:22:51.191 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:51.191 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:51.191 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2826964 00:22:51.191 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:51.191 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:51.191 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2826964' 00:22:51.191 killing process with pid 2826964 00:22:51.191 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2826964 
00:22:51.191 [2024-07-14 04:39:11.347541] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:51.191 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2826964 00:22:51.451 04:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:51.451 04:39:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:51.451 04:39:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:51.451 04:39:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:51.451 04:39:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:51.451 04:39:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:51.451 04:39:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:51.451 04:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:51.451 04:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:51.451 04:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.N0E2ujdMon 00:22:51.451 04:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:51.451 04:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.N0E2ujdMon 00:22:51.451 04:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:51.451 04:39:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.711 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:51.711 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.711 04:39:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2831107 00:22:51.711 04:39:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:51.711 04:39:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2831107 00:22:51.711 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2831107 ']' 00:22:51.711 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.711 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:51.711 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.711 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:51.711 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.711 [2024-07-14 04:39:11.692692] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
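The long-form key generated just above follows the NVMe TLS PSK interchange layout: the NVMeTLSkey-1 prefix, a two-digit hash designator (02, taken from the second argument passed to format_interchange_psk), and a base64 payload carrying the configured secret with four extra bytes appended. Decoding the payload makes that visible; treating the trailing bytes as a CRC32 over the secret is an assumption, not something the log states:

    # Inspect the payload of the key written to /tmp/tmp.N0E2ujdMon; xxd shows the
    # ASCII secret (the hex string passed on the command line) plus four trailing
    # checksum bytes (assumed CRC32).
    echo 'MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==' | base64 -d | xxd

The file is then created with chmod 0600, which matters for the permission checks exercised near the end of the run.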
00:22:51.711 [2024-07-14 04:39:11.692773] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.711 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.711 [2024-07-14 04:39:11.763416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.711 [2024-07-14 04:39:11.852754] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.711 [2024-07-14 04:39:11.852809] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.711 [2024-07-14 04:39:11.852838] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.711 [2024-07-14 04:39:11.852849] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.711 [2024-07-14 04:39:11.852859] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.711 [2024-07-14 04:39:11.852913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.000 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:52.000 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:52.000 04:39:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:52.000 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:52.000 04:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.000 04:39:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.000 04:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.N0E2ujdMon 00:22:52.000 04:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.N0E2ujdMon 00:22:52.000 04:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:52.258 [2024-07-14 04:39:12.216404] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.258 04:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:52.516 04:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:52.774 [2024-07-14 04:39:12.797954] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:52.774 [2024-07-14 04:39:12.798175] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.774 04:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:53.033 malloc0 00:22:53.033 04:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:53.291 04:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N0E2ujdMon 
00:22:53.548 [2024-07-14 04:39:13.544106] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:53.548 04:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.N0E2ujdMon 00:22:53.548 04:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:53.548 04:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:53.548 04:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:53.548 04:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.N0E2ujdMon' 00:22:53.548 04:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:53.548 04:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2831391 00:22:53.548 04:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:53.548 04:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:53.548 04:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2831391 /var/tmp/bdevperf.sock 00:22:53.548 04:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2831391 ']' 00:22:53.548 04:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.548 04:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:53.548 04:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.548 04:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:53.548 04:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.548 [2024-07-14 04:39:13.599836] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
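For the positive TLSTESTn1 case that follows, the target side was prepared with the sequence of RPCs traced above; condensed here with the workspace path shortened to rpc.py but otherwise as issued in the log:

    # TLS-enabled target setup driven by setup_nvmf_tgt in target/tls.sh:
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N0E2ujdMon

The initiator side then only needs bdev_nvme_attach_controller with the matching --psk, which is what the bdevperf instance launched above is about to issue.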
00:22:53.548 [2024-07-14 04:39:13.599915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831391 ] 00:22:53.548 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.548 [2024-07-14 04:39:13.657548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.805 [2024-07-14 04:39:13.744072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.805 04:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:53.805 04:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:53.805 04:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N0E2ujdMon 00:22:54.062 [2024-07-14 04:39:14.076456] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:54.062 [2024-07-14 04:39:14.076580] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:54.062 TLSTESTn1 00:22:54.062 04:39:14 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:54.322 Running I/O for 10 seconds... 00:23:04.305 00:23:04.305 Latency(us) 00:23:04.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.305 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:04.305 Verification LBA range: start 0x0 length 0x2000 00:23:04.305 TLSTESTn1 : 10.07 1681.24 6.57 0.00 0.00 75903.58 6747.78 109517.94 00:23:04.305 =================================================================================================================== 00:23:04.305 Total : 1681.24 6.57 0.00 0.00 75903.58 6747.78 109517.94 00:23:04.305 0 00:23:04.305 04:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:04.305 04:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2831391 00:23:04.305 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2831391 ']' 00:23:04.305 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2831391 00:23:04.305 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:04.305 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:04.305 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2831391 00:23:04.305 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:04.305 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:04.305 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2831391' 00:23:04.305 killing process with pid 2831391 00:23:04.305 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2831391 00:23:04.305 Received shutdown signal, test time was about 10.000000 seconds 00:23:04.305 00:23:04.305 Latency(us) 00:23:04.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:04.305 =================================================================================================================== 00:23:04.305 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:04.305 [2024-07-14 04:39:24.405135] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:04.305 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2831391 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.N0E2ujdMon 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.N0E2ujdMon 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.N0E2ujdMon 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.N0E2ujdMon 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.N0E2ujdMon' 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2832706 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2832706 /var/tmp/bdevperf.sock 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2832706 ']' 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:04.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:04.564 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.564 [2024-07-14 04:39:24.671797] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
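Before this attempt the key file was re-chmod'ed to 0666 (target/tls.sh@170), and the point of the attach that follows is to confirm that a world-readable PSK file is refused: bdev_nvme_load_psk reports "Incorrect permissions for PSK file" and the RPC returns "Operation not permitted" rather than the I/O error seen in the earlier handshake failures. The same rule is applied on the target side later, when nvmf_subsystem_add_host is pointed at the 0666 file. In short:

    chmod 0600 /tmp/tmp.N0E2ujdMon   # accepted: used for the successful TLSTESTn1 run above
    chmod 0666 /tmp/tmp.N0E2ujdMon   # rejected by bdev_nvme_load_psk and by the target's tcp_load_psk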
00:23:04.564 [2024-07-14 04:39:24.671895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832706 ] 00:23:04.564 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.564 [2024-07-14 04:39:24.729596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.822 [2024-07-14 04:39:24.814964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.822 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:04.822 04:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:04.822 04:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N0E2ujdMon 00:23:05.081 [2024-07-14 04:39:25.196718] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:05.081 [2024-07-14 04:39:25.196795] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:05.081 [2024-07-14 04:39:25.196815] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.N0E2ujdMon 00:23:05.081 request: 00:23:05.081 { 00:23:05.081 "name": "TLSTEST", 00:23:05.081 "trtype": "tcp", 00:23:05.081 "traddr": "10.0.0.2", 00:23:05.081 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:05.081 "adrfam": "ipv4", 00:23:05.081 "trsvcid": "4420", 00:23:05.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.081 "psk": "/tmp/tmp.N0E2ujdMon", 00:23:05.081 "method": "bdev_nvme_attach_controller", 00:23:05.081 "req_id": 1 00:23:05.081 } 00:23:05.081 Got JSON-RPC error response 00:23:05.081 response: 00:23:05.081 { 00:23:05.081 "code": -1, 00:23:05.081 "message": "Operation not permitted" 00:23:05.081 } 00:23:05.081 04:39:25 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2832706 00:23:05.081 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2832706 ']' 00:23:05.081 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2832706 00:23:05.081 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:05.081 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:05.081 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2832706 00:23:05.081 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:05.081 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:05.081 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2832706' 00:23:05.081 killing process with pid 2832706 00:23:05.081 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2832706 00:23:05.081 Received shutdown signal, test time was about 10.000000 seconds 00:23:05.081 00:23:05.081 Latency(us) 00:23:05.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.081 =================================================================================================================== 00:23:05.081 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:05.081 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 2832706 00:23:05.340 04:39:25 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:05.340 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:05.340 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:05.340 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:05.340 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:05.340 04:39:25 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2831107 00:23:05.340 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2831107 ']' 00:23:05.340 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2831107 00:23:05.340 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:05.340 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:05.340 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2831107 00:23:05.340 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:05.340 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:05.340 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2831107' 00:23:05.340 killing process with pid 2831107 00:23:05.340 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2831107 00:23:05.340 [2024-07-14 04:39:25.490207] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:05.340 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2831107 00:23:05.598 04:39:25 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:05.598 04:39:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:05.598 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:05.598 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.598 04:39:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2832847 00:23:05.598 04:39:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:05.598 04:39:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2832847 00:23:05.598 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2832847 ']' 00:23:05.598 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.599 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:05.599 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.599 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:05.599 04:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.599 [2024-07-14 04:39:25.786392] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:23:05.599 [2024-07-14 04:39:25.786478] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.857 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.857 [2024-07-14 04:39:25.855158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.857 [2024-07-14 04:39:25.941660] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.857 [2024-07-14 04:39:25.941726] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.857 [2024-07-14 04:39:25.941743] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.857 [2024-07-14 04:39:25.941756] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.857 [2024-07-14 04:39:25.941769] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.857 [2024-07-14 04:39:25.941801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.115 04:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:06.115 04:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:06.115 04:39:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:06.115 04:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:06.115 04:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.115 04:39:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.115 04:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.N0E2ujdMon 00:23:06.115 04:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:06.115 04:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.N0E2ujdMon 00:23:06.115 04:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:06.115 04:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:06.115 04:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:06.115 04:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:06.115 04:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.N0E2ujdMon 00:23:06.115 04:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.N0E2ujdMon 00:23:06.115 04:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:06.373 [2024-07-14 04:39:26.310662] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.373 04:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:06.632 04:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:06.632 [2024-07-14 04:39:26.804046] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:23:06.632 [2024-07-14 04:39:26.804297] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.889 04:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:06.889 malloc0 00:23:06.889 04:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:07.146 04:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N0E2ujdMon 00:23:07.404 [2024-07-14 04:39:27.525587] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:07.404 [2024-07-14 04:39:27.525629] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:07.404 [2024-07-14 04:39:27.525664] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:07.404 request: 00:23:07.404 { 00:23:07.404 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.404 "host": "nqn.2016-06.io.spdk:host1", 00:23:07.404 "psk": "/tmp/tmp.N0E2ujdMon", 00:23:07.404 "method": "nvmf_subsystem_add_host", 00:23:07.404 "req_id": 1 00:23:07.404 } 00:23:07.404 Got JSON-RPC error response 00:23:07.404 response: 00:23:07.404 { 00:23:07.404 "code": -32603, 00:23:07.404 "message": "Internal error" 00:23:07.404 } 00:23:07.404 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:07.404 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:07.404 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:07.404 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:07.404 04:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2832847 00:23:07.404 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2832847 ']' 00:23:07.404 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2832847 00:23:07.404 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:07.404 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:07.404 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2832847 00:23:07.404 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:07.404 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:07.404 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2832847' 00:23:07.404 killing process with pid 2832847 00:23:07.404 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2832847 00:23:07.404 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2832847 00:23:07.663 04:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.N0E2ujdMon 00:23:07.663 04:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:07.663 04:39:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:07.663 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:07.663 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.663 04:39:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=2833141 00:23:07.663 04:39:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:07.663 04:39:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2833141 00:23:07.663 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2833141 ']' 00:23:07.663 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.663 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:07.663 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.663 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:07.663 04:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.922 [2024-07-14 04:39:27.871756] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:07.922 [2024-07-14 04:39:27.871833] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.922 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.922 [2024-07-14 04:39:27.936058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.922 [2024-07-14 04:39:28.021237] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.922 [2024-07-14 04:39:28.021290] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.922 [2024-07-14 04:39:28.021319] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.922 [2024-07-14 04:39:28.021332] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.922 [2024-07-14 04:39:28.021342] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:07.923 [2024-07-14 04:39:28.021375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.181 04:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:08.181 04:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:08.181 04:39:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:08.181 04:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:08.181 04:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.181 04:39:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.181 04:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.N0E2ujdMon 00:23:08.181 04:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.N0E2ujdMon 00:23:08.181 04:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:08.441 [2024-07-14 04:39:28.384067] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.441 04:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:08.700 04:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:08.700 [2024-07-14 04:39:28.873392] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:08.701 [2024-07-14 04:39:28.873634] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.701 04:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:09.269 malloc0 00:23:09.269 04:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:09.528 04:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N0E2ujdMon 00:23:09.787 [2024-07-14 04:39:29.736386] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:09.787 04:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2833308 00:23:09.787 04:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:09.787 04:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:09.787 04:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2833308 /var/tmp/bdevperf.sock 00:23:09.787 04:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2833308 ']' 00:23:09.787 04:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.787 04:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:09.787 04:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.787 04:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:09.787 04:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.787 [2024-07-14 04:39:29.791478] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:09.787 [2024-07-14 04:39:29.791566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833308 ] 00:23:09.787 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.787 [2024-07-14 04:39:29.867192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.787 [2024-07-14 04:39:29.964715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.046 04:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:10.046 04:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:10.046 04:39:30 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N0E2ujdMon 00:23:10.304 [2024-07-14 04:39:30.371561] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:10.304 [2024-07-14 04:39:30.371702] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:10.304 TLSTESTn1 00:23:10.304 04:39:30 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:10.874 04:39:30 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:10.874 "subsystems": [ 00:23:10.874 { 00:23:10.874 "subsystem": "keyring", 00:23:10.874 "config": [] 00:23:10.874 }, 00:23:10.874 { 00:23:10.874 "subsystem": "iobuf", 00:23:10.874 "config": [ 00:23:10.874 { 00:23:10.874 "method": "iobuf_set_options", 00:23:10.874 "params": { 00:23:10.874 "small_pool_count": 8192, 00:23:10.874 "large_pool_count": 1024, 00:23:10.874 "small_bufsize": 8192, 00:23:10.874 "large_bufsize": 135168 00:23:10.874 } 00:23:10.874 } 00:23:10.874 ] 00:23:10.874 }, 00:23:10.874 { 00:23:10.874 "subsystem": "sock", 00:23:10.874 "config": [ 00:23:10.874 { 00:23:10.874 "method": "sock_set_default_impl", 00:23:10.874 "params": { 00:23:10.874 "impl_name": "posix" 00:23:10.874 } 00:23:10.874 }, 00:23:10.874 { 00:23:10.874 "method": "sock_impl_set_options", 00:23:10.874 "params": { 00:23:10.874 "impl_name": "ssl", 00:23:10.874 "recv_buf_size": 4096, 00:23:10.874 "send_buf_size": 4096, 00:23:10.874 "enable_recv_pipe": true, 00:23:10.874 "enable_quickack": false, 00:23:10.874 "enable_placement_id": 0, 00:23:10.874 "enable_zerocopy_send_server": true, 00:23:10.874 "enable_zerocopy_send_client": false, 00:23:10.874 "zerocopy_threshold": 0, 00:23:10.874 "tls_version": 0, 00:23:10.874 "enable_ktls": false 00:23:10.874 } 00:23:10.874 }, 00:23:10.874 { 00:23:10.874 "method": "sock_impl_set_options", 00:23:10.874 "params": { 00:23:10.874 "impl_name": "posix", 00:23:10.874 "recv_buf_size": 2097152, 00:23:10.874 "send_buf_size": 
2097152, 00:23:10.874 "enable_recv_pipe": true, 00:23:10.874 "enable_quickack": false, 00:23:10.874 "enable_placement_id": 0, 00:23:10.874 "enable_zerocopy_send_server": true, 00:23:10.874 "enable_zerocopy_send_client": false, 00:23:10.874 "zerocopy_threshold": 0, 00:23:10.874 "tls_version": 0, 00:23:10.874 "enable_ktls": false 00:23:10.874 } 00:23:10.874 } 00:23:10.874 ] 00:23:10.874 }, 00:23:10.874 { 00:23:10.874 "subsystem": "vmd", 00:23:10.874 "config": [] 00:23:10.874 }, 00:23:10.874 { 00:23:10.874 "subsystem": "accel", 00:23:10.874 "config": [ 00:23:10.874 { 00:23:10.874 "method": "accel_set_options", 00:23:10.874 "params": { 00:23:10.875 "small_cache_size": 128, 00:23:10.875 "large_cache_size": 16, 00:23:10.875 "task_count": 2048, 00:23:10.875 "sequence_count": 2048, 00:23:10.875 "buf_count": 2048 00:23:10.875 } 00:23:10.875 } 00:23:10.875 ] 00:23:10.875 }, 00:23:10.875 { 00:23:10.875 "subsystem": "bdev", 00:23:10.875 "config": [ 00:23:10.875 { 00:23:10.875 "method": "bdev_set_options", 00:23:10.875 "params": { 00:23:10.875 "bdev_io_pool_size": 65535, 00:23:10.875 "bdev_io_cache_size": 256, 00:23:10.875 "bdev_auto_examine": true, 00:23:10.875 "iobuf_small_cache_size": 128, 00:23:10.875 "iobuf_large_cache_size": 16 00:23:10.875 } 00:23:10.875 }, 00:23:10.875 { 00:23:10.875 "method": "bdev_raid_set_options", 00:23:10.875 "params": { 00:23:10.875 "process_window_size_kb": 1024 00:23:10.875 } 00:23:10.875 }, 00:23:10.875 { 00:23:10.875 "method": "bdev_iscsi_set_options", 00:23:10.875 "params": { 00:23:10.875 "timeout_sec": 30 00:23:10.875 } 00:23:10.875 }, 00:23:10.875 { 00:23:10.875 "method": "bdev_nvme_set_options", 00:23:10.875 "params": { 00:23:10.875 "action_on_timeout": "none", 00:23:10.875 "timeout_us": 0, 00:23:10.875 "timeout_admin_us": 0, 00:23:10.875 "keep_alive_timeout_ms": 10000, 00:23:10.875 "arbitration_burst": 0, 00:23:10.875 "low_priority_weight": 0, 00:23:10.875 "medium_priority_weight": 0, 00:23:10.875 "high_priority_weight": 0, 00:23:10.875 "nvme_adminq_poll_period_us": 10000, 00:23:10.875 "nvme_ioq_poll_period_us": 0, 00:23:10.875 "io_queue_requests": 0, 00:23:10.875 "delay_cmd_submit": true, 00:23:10.875 "transport_retry_count": 4, 00:23:10.875 "bdev_retry_count": 3, 00:23:10.875 "transport_ack_timeout": 0, 00:23:10.875 "ctrlr_loss_timeout_sec": 0, 00:23:10.875 "reconnect_delay_sec": 0, 00:23:10.875 "fast_io_fail_timeout_sec": 0, 00:23:10.875 "disable_auto_failback": false, 00:23:10.875 "generate_uuids": false, 00:23:10.875 "transport_tos": 0, 00:23:10.875 "nvme_error_stat": false, 00:23:10.875 "rdma_srq_size": 0, 00:23:10.875 "io_path_stat": false, 00:23:10.875 "allow_accel_sequence": false, 00:23:10.875 "rdma_max_cq_size": 0, 00:23:10.875 "rdma_cm_event_timeout_ms": 0, 00:23:10.875 "dhchap_digests": [ 00:23:10.875 "sha256", 00:23:10.875 "sha384", 00:23:10.875 "sha512" 00:23:10.875 ], 00:23:10.875 "dhchap_dhgroups": [ 00:23:10.875 "null", 00:23:10.875 "ffdhe2048", 00:23:10.875 "ffdhe3072", 00:23:10.875 "ffdhe4096", 00:23:10.875 "ffdhe6144", 00:23:10.875 "ffdhe8192" 00:23:10.875 ] 00:23:10.875 } 00:23:10.875 }, 00:23:10.875 { 00:23:10.875 "method": "bdev_nvme_set_hotplug", 00:23:10.875 "params": { 00:23:10.875 "period_us": 100000, 00:23:10.875 "enable": false 00:23:10.875 } 00:23:10.875 }, 00:23:10.875 { 00:23:10.875 "method": "bdev_malloc_create", 00:23:10.875 "params": { 00:23:10.875 "name": "malloc0", 00:23:10.875 "num_blocks": 8192, 00:23:10.875 "block_size": 4096, 00:23:10.875 "physical_block_size": 4096, 00:23:10.875 "uuid": 
"474b3d7a-22d7-4bae-8bd3-a50f60e9be99", 00:23:10.875 "optimal_io_boundary": 0 00:23:10.875 } 00:23:10.875 }, 00:23:10.875 { 00:23:10.875 "method": "bdev_wait_for_examine" 00:23:10.875 } 00:23:10.875 ] 00:23:10.875 }, 00:23:10.875 { 00:23:10.875 "subsystem": "nbd", 00:23:10.875 "config": [] 00:23:10.875 }, 00:23:10.875 { 00:23:10.875 "subsystem": "scheduler", 00:23:10.875 "config": [ 00:23:10.875 { 00:23:10.875 "method": "framework_set_scheduler", 00:23:10.875 "params": { 00:23:10.875 "name": "static" 00:23:10.875 } 00:23:10.875 } 00:23:10.875 ] 00:23:10.875 }, 00:23:10.875 { 00:23:10.875 "subsystem": "nvmf", 00:23:10.875 "config": [ 00:23:10.875 { 00:23:10.875 "method": "nvmf_set_config", 00:23:10.875 "params": { 00:23:10.875 "discovery_filter": "match_any", 00:23:10.875 "admin_cmd_passthru": { 00:23:10.875 "identify_ctrlr": false 00:23:10.875 } 00:23:10.875 } 00:23:10.875 }, 00:23:10.875 { 00:23:10.875 "method": "nvmf_set_max_subsystems", 00:23:10.875 "params": { 00:23:10.875 "max_subsystems": 1024 00:23:10.875 } 00:23:10.875 }, 00:23:10.875 { 00:23:10.875 "method": "nvmf_set_crdt", 00:23:10.875 "params": { 00:23:10.875 "crdt1": 0, 00:23:10.875 "crdt2": 0, 00:23:10.875 "crdt3": 0 00:23:10.875 } 00:23:10.875 }, 00:23:10.875 { 00:23:10.875 "method": "nvmf_create_transport", 00:23:10.875 "params": { 00:23:10.875 "trtype": "TCP", 00:23:10.875 "max_queue_depth": 128, 00:23:10.875 "max_io_qpairs_per_ctrlr": 127, 00:23:10.875 "in_capsule_data_size": 4096, 00:23:10.875 "max_io_size": 131072, 00:23:10.875 "io_unit_size": 131072, 00:23:10.875 "max_aq_depth": 128, 00:23:10.875 "num_shared_buffers": 511, 00:23:10.875 "buf_cache_size": 4294967295, 00:23:10.875 "dif_insert_or_strip": false, 00:23:10.875 "zcopy": false, 00:23:10.875 "c2h_success": false, 00:23:10.875 "sock_priority": 0, 00:23:10.875 "abort_timeout_sec": 1, 00:23:10.875 "ack_timeout": 0, 00:23:10.875 "data_wr_pool_size": 0 00:23:10.875 } 00:23:10.875 }, 00:23:10.875 { 00:23:10.875 "method": "nvmf_create_subsystem", 00:23:10.875 "params": { 00:23:10.875 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.875 "allow_any_host": false, 00:23:10.875 "serial_number": "SPDK00000000000001", 00:23:10.875 "model_number": "SPDK bdev Controller", 00:23:10.875 "max_namespaces": 10, 00:23:10.875 "min_cntlid": 1, 00:23:10.875 "max_cntlid": 65519, 00:23:10.875 "ana_reporting": false 00:23:10.875 } 00:23:10.875 }, 00:23:10.875 { 00:23:10.875 "method": "nvmf_subsystem_add_host", 00:23:10.875 "params": { 00:23:10.875 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.875 "host": "nqn.2016-06.io.spdk:host1", 00:23:10.875 "psk": "/tmp/tmp.N0E2ujdMon" 00:23:10.875 } 00:23:10.875 }, 00:23:10.875 { 00:23:10.875 "method": "nvmf_subsystem_add_ns", 00:23:10.875 "params": { 00:23:10.875 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.875 "namespace": { 00:23:10.875 "nsid": 1, 00:23:10.875 "bdev_name": "malloc0", 00:23:10.875 "nguid": "474B3D7A22D74BAE8BD3A50F60E9BE99", 00:23:10.875 "uuid": "474b3d7a-22d7-4bae-8bd3-a50f60e9be99", 00:23:10.875 "no_auto_visible": false 00:23:10.875 } 00:23:10.875 } 00:23:10.875 }, 00:23:10.875 { 00:23:10.875 "method": "nvmf_subsystem_add_listener", 00:23:10.875 "params": { 00:23:10.875 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.875 "listen_address": { 00:23:10.875 "trtype": "TCP", 00:23:10.875 "adrfam": "IPv4", 00:23:10.875 "traddr": "10.0.0.2", 00:23:10.875 "trsvcid": "4420" 00:23:10.875 }, 00:23:10.875 "secure_channel": true 00:23:10.875 } 00:23:10.875 } 00:23:10.875 ] 00:23:10.875 } 00:23:10.875 ] 00:23:10.875 }' 00:23:10.875 04:39:30 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:11.135 04:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:11.135 "subsystems": [ 00:23:11.135 { 00:23:11.135 "subsystem": "keyring", 00:23:11.135 "config": [] 00:23:11.135 }, 00:23:11.135 { 00:23:11.135 "subsystem": "iobuf", 00:23:11.135 "config": [ 00:23:11.135 { 00:23:11.135 "method": "iobuf_set_options", 00:23:11.135 "params": { 00:23:11.135 "small_pool_count": 8192, 00:23:11.135 "large_pool_count": 1024, 00:23:11.135 "small_bufsize": 8192, 00:23:11.135 "large_bufsize": 135168 00:23:11.135 } 00:23:11.135 } 00:23:11.135 ] 00:23:11.135 }, 00:23:11.135 { 00:23:11.135 "subsystem": "sock", 00:23:11.135 "config": [ 00:23:11.135 { 00:23:11.135 "method": "sock_set_default_impl", 00:23:11.135 "params": { 00:23:11.135 "impl_name": "posix" 00:23:11.135 } 00:23:11.135 }, 00:23:11.135 { 00:23:11.135 "method": "sock_impl_set_options", 00:23:11.135 "params": { 00:23:11.135 "impl_name": "ssl", 00:23:11.135 "recv_buf_size": 4096, 00:23:11.135 "send_buf_size": 4096, 00:23:11.135 "enable_recv_pipe": true, 00:23:11.135 "enable_quickack": false, 00:23:11.135 "enable_placement_id": 0, 00:23:11.135 "enable_zerocopy_send_server": true, 00:23:11.135 "enable_zerocopy_send_client": false, 00:23:11.135 "zerocopy_threshold": 0, 00:23:11.135 "tls_version": 0, 00:23:11.135 "enable_ktls": false 00:23:11.135 } 00:23:11.135 }, 00:23:11.135 { 00:23:11.135 "method": "sock_impl_set_options", 00:23:11.135 "params": { 00:23:11.135 "impl_name": "posix", 00:23:11.135 "recv_buf_size": 2097152, 00:23:11.135 "send_buf_size": 2097152, 00:23:11.135 "enable_recv_pipe": true, 00:23:11.135 "enable_quickack": false, 00:23:11.135 "enable_placement_id": 0, 00:23:11.135 "enable_zerocopy_send_server": true, 00:23:11.135 "enable_zerocopy_send_client": false, 00:23:11.135 "zerocopy_threshold": 0, 00:23:11.135 "tls_version": 0, 00:23:11.135 "enable_ktls": false 00:23:11.135 } 00:23:11.135 } 00:23:11.135 ] 00:23:11.135 }, 00:23:11.135 { 00:23:11.135 "subsystem": "vmd", 00:23:11.135 "config": [] 00:23:11.135 }, 00:23:11.135 { 00:23:11.135 "subsystem": "accel", 00:23:11.135 "config": [ 00:23:11.135 { 00:23:11.135 "method": "accel_set_options", 00:23:11.135 "params": { 00:23:11.135 "small_cache_size": 128, 00:23:11.135 "large_cache_size": 16, 00:23:11.135 "task_count": 2048, 00:23:11.135 "sequence_count": 2048, 00:23:11.135 "buf_count": 2048 00:23:11.135 } 00:23:11.135 } 00:23:11.135 ] 00:23:11.135 }, 00:23:11.135 { 00:23:11.135 "subsystem": "bdev", 00:23:11.135 "config": [ 00:23:11.135 { 00:23:11.135 "method": "bdev_set_options", 00:23:11.135 "params": { 00:23:11.135 "bdev_io_pool_size": 65535, 00:23:11.135 "bdev_io_cache_size": 256, 00:23:11.135 "bdev_auto_examine": true, 00:23:11.135 "iobuf_small_cache_size": 128, 00:23:11.135 "iobuf_large_cache_size": 16 00:23:11.135 } 00:23:11.135 }, 00:23:11.135 { 00:23:11.135 "method": "bdev_raid_set_options", 00:23:11.135 "params": { 00:23:11.135 "process_window_size_kb": 1024 00:23:11.135 } 00:23:11.135 }, 00:23:11.135 { 00:23:11.135 "method": "bdev_iscsi_set_options", 00:23:11.135 "params": { 00:23:11.135 "timeout_sec": 30 00:23:11.135 } 00:23:11.135 }, 00:23:11.135 { 00:23:11.135 "method": "bdev_nvme_set_options", 00:23:11.135 "params": { 00:23:11.135 "action_on_timeout": "none", 00:23:11.135 "timeout_us": 0, 00:23:11.135 "timeout_admin_us": 0, 00:23:11.135 "keep_alive_timeout_ms": 10000, 00:23:11.135 "arbitration_burst": 0, 
00:23:11.135 "low_priority_weight": 0, 00:23:11.135 "medium_priority_weight": 0, 00:23:11.135 "high_priority_weight": 0, 00:23:11.135 "nvme_adminq_poll_period_us": 10000, 00:23:11.135 "nvme_ioq_poll_period_us": 0, 00:23:11.135 "io_queue_requests": 512, 00:23:11.135 "delay_cmd_submit": true, 00:23:11.136 "transport_retry_count": 4, 00:23:11.136 "bdev_retry_count": 3, 00:23:11.136 "transport_ack_timeout": 0, 00:23:11.136 "ctrlr_loss_timeout_sec": 0, 00:23:11.136 "reconnect_delay_sec": 0, 00:23:11.136 "fast_io_fail_timeout_sec": 0, 00:23:11.136 "disable_auto_failback": false, 00:23:11.136 "generate_uuids": false, 00:23:11.136 "transport_tos": 0, 00:23:11.136 "nvme_error_stat": false, 00:23:11.136 "rdma_srq_size": 0, 00:23:11.136 "io_path_stat": false, 00:23:11.136 "allow_accel_sequence": false, 00:23:11.136 "rdma_max_cq_size": 0, 00:23:11.136 "rdma_cm_event_timeout_ms": 0, 00:23:11.136 "dhchap_digests": [ 00:23:11.136 "sha256", 00:23:11.136 "sha384", 00:23:11.136 "sha512" 00:23:11.136 ], 00:23:11.136 "dhchap_dhgroups": [ 00:23:11.136 "null", 00:23:11.136 "ffdhe2048", 00:23:11.136 "ffdhe3072", 00:23:11.136 "ffdhe4096", 00:23:11.136 "ffdhe6144", 00:23:11.136 "ffdhe8192" 00:23:11.136 ] 00:23:11.136 } 00:23:11.136 }, 00:23:11.136 { 00:23:11.136 "method": "bdev_nvme_attach_controller", 00:23:11.136 "params": { 00:23:11.136 "name": "TLSTEST", 00:23:11.136 "trtype": "TCP", 00:23:11.136 "adrfam": "IPv4", 00:23:11.136 "traddr": "10.0.0.2", 00:23:11.136 "trsvcid": "4420", 00:23:11.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.136 "prchk_reftag": false, 00:23:11.136 "prchk_guard": false, 00:23:11.136 "ctrlr_loss_timeout_sec": 0, 00:23:11.136 "reconnect_delay_sec": 0, 00:23:11.136 "fast_io_fail_timeout_sec": 0, 00:23:11.136 "psk": "/tmp/tmp.N0E2ujdMon", 00:23:11.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.136 "hdgst": false, 00:23:11.136 "ddgst": false 00:23:11.136 } 00:23:11.136 }, 00:23:11.136 { 00:23:11.136 "method": "bdev_nvme_set_hotplug", 00:23:11.136 "params": { 00:23:11.136 "period_us": 100000, 00:23:11.136 "enable": false 00:23:11.136 } 00:23:11.136 }, 00:23:11.136 { 00:23:11.136 "method": "bdev_wait_for_examine" 00:23:11.136 } 00:23:11.136 ] 00:23:11.136 }, 00:23:11.136 { 00:23:11.136 "subsystem": "nbd", 00:23:11.136 "config": [] 00:23:11.136 } 00:23:11.136 ] 00:23:11.136 }' 00:23:11.136 04:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2833308 00:23:11.136 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2833308 ']' 00:23:11.136 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2833308 00:23:11.136 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:11.136 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:11.136 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2833308 00:23:11.136 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:11.136 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:11.136 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2833308' 00:23:11.136 killing process with pid 2833308 00:23:11.136 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2833308 00:23:11.136 Received shutdown signal, test time was about 10.000000 seconds 00:23:11.136 00:23:11.136 Latency(us) 00:23:11.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:23:11.136 =================================================================================================================== 00:23:11.136 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:11.136 [2024-07-14 04:39:31.149617] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:11.136 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2833308 00:23:11.395 04:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2833141 00:23:11.395 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2833141 ']' 00:23:11.395 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2833141 00:23:11.395 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:11.395 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:11.395 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2833141 00:23:11.395 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:11.395 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:11.395 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2833141' 00:23:11.395 killing process with pid 2833141 00:23:11.395 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2833141 00:23:11.395 [2024-07-14 04:39:31.392670] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:11.395 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2833141 00:23:11.682 04:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:11.682 04:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:11.682 04:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:11.682 "subsystems": [ 00:23:11.682 { 00:23:11.682 "subsystem": "keyring", 00:23:11.682 "config": [] 00:23:11.682 }, 00:23:11.682 { 00:23:11.682 "subsystem": "iobuf", 00:23:11.682 "config": [ 00:23:11.682 { 00:23:11.682 "method": "iobuf_set_options", 00:23:11.682 "params": { 00:23:11.682 "small_pool_count": 8192, 00:23:11.682 "large_pool_count": 1024, 00:23:11.682 "small_bufsize": 8192, 00:23:11.682 "large_bufsize": 135168 00:23:11.682 } 00:23:11.682 } 00:23:11.682 ] 00:23:11.682 }, 00:23:11.682 { 00:23:11.682 "subsystem": "sock", 00:23:11.682 "config": [ 00:23:11.682 { 00:23:11.682 "method": "sock_set_default_impl", 00:23:11.682 "params": { 00:23:11.682 "impl_name": "posix" 00:23:11.682 } 00:23:11.682 }, 00:23:11.682 { 00:23:11.682 "method": "sock_impl_set_options", 00:23:11.682 "params": { 00:23:11.682 "impl_name": "ssl", 00:23:11.682 "recv_buf_size": 4096, 00:23:11.682 "send_buf_size": 4096, 00:23:11.682 "enable_recv_pipe": true, 00:23:11.682 "enable_quickack": false, 00:23:11.682 "enable_placement_id": 0, 00:23:11.682 "enable_zerocopy_send_server": true, 00:23:11.682 "enable_zerocopy_send_client": false, 00:23:11.682 "zerocopy_threshold": 0, 00:23:11.682 "tls_version": 0, 00:23:11.682 "enable_ktls": false 00:23:11.682 } 00:23:11.682 }, 00:23:11.682 { 00:23:11.682 "method": "sock_impl_set_options", 00:23:11.682 "params": { 00:23:11.682 "impl_name": "posix", 00:23:11.682 "recv_buf_size": 2097152, 00:23:11.682 "send_buf_size": 2097152, 00:23:11.682 "enable_recv_pipe": true, 
00:23:11.682 "enable_quickack": false, 00:23:11.682 "enable_placement_id": 0, 00:23:11.682 "enable_zerocopy_send_server": true, 00:23:11.682 "enable_zerocopy_send_client": false, 00:23:11.682 "zerocopy_threshold": 0, 00:23:11.682 "tls_version": 0, 00:23:11.682 "enable_ktls": false 00:23:11.682 } 00:23:11.682 } 00:23:11.682 ] 00:23:11.682 }, 00:23:11.682 { 00:23:11.682 "subsystem": "vmd", 00:23:11.682 "config": [] 00:23:11.682 }, 00:23:11.682 { 00:23:11.682 "subsystem": "accel", 00:23:11.682 "config": [ 00:23:11.682 { 00:23:11.682 "method": "accel_set_options", 00:23:11.682 "params": { 00:23:11.682 "small_cache_size": 128, 00:23:11.682 "large_cache_size": 16, 00:23:11.682 "task_count": 2048, 00:23:11.682 "sequence_count": 2048, 00:23:11.682 "buf_count": 2048 00:23:11.682 } 00:23:11.682 } 00:23:11.682 ] 00:23:11.682 }, 00:23:11.682 { 00:23:11.682 "subsystem": "bdev", 00:23:11.682 "config": [ 00:23:11.682 { 00:23:11.682 "method": "bdev_set_options", 00:23:11.682 "params": { 00:23:11.682 "bdev_io_pool_size": 65535, 00:23:11.682 "bdev_io_cache_size": 256, 00:23:11.682 "bdev_auto_examine": true, 00:23:11.682 "iobuf_small_cache_size": 128, 00:23:11.682 "iobuf_large_cache_size": 16 00:23:11.682 } 00:23:11.682 }, 00:23:11.682 { 00:23:11.682 "method": "bdev_raid_set_options", 00:23:11.682 "params": { 00:23:11.682 "process_window_size_kb": 1024 00:23:11.682 } 00:23:11.682 }, 00:23:11.682 { 00:23:11.682 "method": "bdev_iscsi_set_options", 00:23:11.682 "params": { 00:23:11.682 "timeout_sec": 30 00:23:11.682 } 00:23:11.682 }, 00:23:11.682 { 00:23:11.682 "method": "bdev_nvme_set_options", 00:23:11.682 "params": { 00:23:11.682 "action_on_timeout": "none", 00:23:11.682 "timeout_us": 0, 00:23:11.682 "timeout_admin_us": 0, 00:23:11.682 "keep_alive_timeout_ms": 10000, 00:23:11.682 "arbitration_burst": 0, 00:23:11.682 "low_priority_weight": 0, 00:23:11.682 "medium_priority_weight": 0, 00:23:11.682 "high_priority_weight": 0, 00:23:11.682 "nvme_adminq_poll_period_us": 10000, 00:23:11.682 "nvme_ioq_poll_period_us": 0, 00:23:11.682 "io_queue_requests": 0, 00:23:11.682 "delay_cmd_submit": true, 00:23:11.682 "transport_retry_count": 4, 00:23:11.682 "bdev_retry_count": 3, 00:23:11.682 "transport_ack_timeout": 0, 00:23:11.682 "ctrlr_loss_timeout_sec": 0, 00:23:11.682 "reconnect_delay_sec": 0, 00:23:11.682 "fast_io_fail_timeout_sec": 0, 00:23:11.682 "disable_auto_failback": false, 00:23:11.682 "generate_uuids": false, 00:23:11.682 "transport_tos": 0, 00:23:11.682 "nvme_error_stat": false, 00:23:11.682 "rdma_srq_size": 0, 00:23:11.682 "io_path_stat": false, 00:23:11.682 "allow_accel_sequence": false, 00:23:11.682 "rdma_max_cq_size": 0, 00:23:11.682 "rdma_cm_event_timeout_ms": 0, 00:23:11.682 "dhchap_digests": [ 00:23:11.682 "sha256", 00:23:11.682 "sha384", 00:23:11.682 "sha512" 00:23:11.682 ], 00:23:11.682 "dhchap_dhgroups": [ 00:23:11.682 "null", 00:23:11.682 "ffdhe2048", 00:23:11.682 "ffdhe3072", 00:23:11.682 "ffdhe4096", 00:23:11.682 "ffdhe6144", 00:23:11.682 "ffdhe8192" 00:23:11.682 ] 00:23:11.682 } 00:23:11.682 }, 00:23:11.682 { 00:23:11.682 "method": "bdev_nvme_set_hotplug", 00:23:11.682 "params": { 00:23:11.682 "period_us": 100000, 00:23:11.682 "enable": false 00:23:11.682 } 00:23:11.682 }, 00:23:11.682 { 00:23:11.682 "method": "bdev_malloc_create", 00:23:11.682 "params": { 00:23:11.682 "name": "malloc0", 00:23:11.682 "num_blocks": 8192, 00:23:11.682 "block_size": 4096, 00:23:11.682 "physical_block_size": 4096, 00:23:11.682 "uuid": "474b3d7a-22d7-4bae-8bd3-a50f60e9be99", 00:23:11.682 "optimal_io_boundary": 0 
00:23:11.682 } 00:23:11.682 }, 00:23:11.682 { 00:23:11.682 "method": "bdev_wait_for_examine" 00:23:11.682 } 00:23:11.682 ] 00:23:11.682 }, 00:23:11.682 { 00:23:11.682 "subsystem": "nbd", 00:23:11.682 "config": [] 00:23:11.682 }, 00:23:11.682 { 00:23:11.682 "subsystem": "scheduler", 00:23:11.682 "config": [ 00:23:11.682 { 00:23:11.682 "method": "framework_set_scheduler", 00:23:11.682 "params": { 00:23:11.682 "name": "static" 00:23:11.682 } 00:23:11.682 } 00:23:11.682 ] 00:23:11.682 }, 00:23:11.682 { 00:23:11.682 "subsystem": "nvmf", 00:23:11.683 "config": [ 00:23:11.683 { 00:23:11.683 "method": "nvmf_set_config", 00:23:11.683 "params": { 00:23:11.683 "discovery_filter": "match_any", 00:23:11.683 "admin_cmd_passthru": { 00:23:11.683 "identify_ctrlr": false 00:23:11.683 } 00:23:11.683 } 00:23:11.683 }, 00:23:11.683 { 00:23:11.683 "method": "nvmf_set_max_subsystems", 00:23:11.683 "params": { 00:23:11.683 "max_subsystems": 1024 00:23:11.683 } 00:23:11.683 }, 00:23:11.683 { 00:23:11.683 "method": "nvmf_set_crdt", 00:23:11.683 "params": { 00:23:11.683 "crdt1": 0, 00:23:11.683 "crdt2": 0, 00:23:11.683 "crdt3": 0 00:23:11.683 } 00:23:11.683 }, 00:23:11.683 { 00:23:11.683 "method": "nvmf_create_transport", 00:23:11.683 "params": { 00:23:11.683 "trtype": "TCP", 00:23:11.683 "max_queue_depth": 128, 00:23:11.683 "max_io_qpairs_per_ctrlr": 127, 00:23:11.683 "in_capsule_data_size": 4096, 00:23:11.683 "max_io_size": 131072, 00:23:11.683 "io_unit_size": 131072, 00:23:11.683 "max_aq_depth": 128, 00:23:11.683 "num_shared_buffers": 511, 00:23:11.683 "buf_cache_size": 4294967295, 00:23:11.683 "dif_insert_or_strip": false, 00:23:11.683 "zcopy": false, 00:23:11.683 "c2h_success": false, 00:23:11.683 "sock_priority": 0, 00:23:11.683 "abort_timeout_sec": 1, 00:23:11.683 "ack_timeout": 0, 00:23:11.683 "data_wr_pool_size": 0 00:23:11.683 } 00:23:11.683 }, 00:23:11.683 { 00:23:11.683 "method": "nvmf_create_subsystem", 00:23:11.683 "params": { 00:23:11.683 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.683 "allow_any_host": false, 00:23:11.683 "serial_number": "SPDK00000000000001", 00:23:11.683 "model_number": "SPDK bdev Controller", 00:23:11.683 "max_namespaces": 10, 00:23:11.683 "min_cntlid": 1, 00:23:11.683 "max_cntlid": 65519, 00:23:11.683 "ana_reporting": false 00:23:11.683 } 00:23:11.683 }, 00:23:11.683 { 00:23:11.683 "method": "nvmf_subsystem_add_host", 00:23:11.683 "params": { 00:23:11.683 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.683 "host": "nqn.2016-06.io.spdk:host1", 00:23:11.683 "psk": "/tmp/tmp.N0E2ujdMon" 00:23:11.683 } 00:23:11.683 }, 00:23:11.683 { 00:23:11.683 "method": "nvmf_subsystem_add_ns", 00:23:11.683 "params": { 00:23:11.683 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.683 "namespace": { 00:23:11.683 "nsid": 1, 00:23:11.683 "bdev_name": "malloc0", 00:23:11.683 "nguid": "474B3D7A22D74BAE8BD3A50F60E9BE99", 00:23:11.683 "uuid": "474b3d7a-22d7-4bae-8bd3-a50f60e9be99", 00:23:11.683 "no_auto_visible": false 00:23:11.683 } 00:23:11.683 } 00:23:11.683 }, 00:23:11.683 { 00:23:11.683 "method": "nvmf_subsystem_add_listener", 00:23:11.683 "params": { 00:23:11.683 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.683 "listen_address": { 00:23:11.683 "trtype": "TCP", 00:23:11.683 "adrfam": "IPv4", 00:23:11.683 "traddr": "10.0.0.2", 00:23:11.683 "trsvcid": "4420" 00:23:11.683 }, 00:23:11.683 "secure_channel": true 00:23:11.683 } 00:23:11.683 } 00:23:11.683 ] 00:23:11.683 } 00:23:11.683 ] 00:23:11.683 }' 00:23:11.683 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:11.683 
04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.683 04:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2833584 00:23:11.683 04:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:11.683 04:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2833584 00:23:11.683 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2833584 ']' 00:23:11.683 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.683 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:11.683 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.683 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:11.683 04:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.683 [2024-07-14 04:39:31.688384] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:11.683 [2024-07-14 04:39:31.688463] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.683 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.683 [2024-07-14 04:39:31.751071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.683 [2024-07-14 04:39:31.833771] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.683 [2024-07-14 04:39:31.833829] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.683 [2024-07-14 04:39:31.833857] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.683 [2024-07-14 04:39:31.833876] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.683 [2024-07-14 04:39:31.833886] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:11.683 [2024-07-14 04:39:31.833987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.943 [2024-07-14 04:39:32.066471] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.943 [2024-07-14 04:39:32.082440] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:11.943 [2024-07-14 04:39:32.098490] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:11.943 [2024-07-14 04:39:32.112008] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.510 04:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:12.510 04:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:12.510 04:39:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:12.510 04:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.510 04:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.510 04:39:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.510 04:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2833736 00:23:12.510 04:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2833736 /var/tmp/bdevperf.sock 00:23:12.510 04:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2833736 ']' 00:23:12.510 04:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:12.510 04:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:12.510 04:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:12.510 04:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:12.510 "subsystems": [ 00:23:12.510 { 00:23:12.510 "subsystem": "keyring", 00:23:12.510 "config": [] 00:23:12.510 }, 00:23:12.510 { 00:23:12.510 "subsystem": "iobuf", 00:23:12.510 "config": [ 00:23:12.510 { 00:23:12.510 "method": "iobuf_set_options", 00:23:12.510 "params": { 00:23:12.510 "small_pool_count": 8192, 00:23:12.510 "large_pool_count": 1024, 00:23:12.510 "small_bufsize": 8192, 00:23:12.510 "large_bufsize": 135168 00:23:12.510 } 00:23:12.510 } 00:23:12.510 ] 00:23:12.510 }, 00:23:12.510 { 00:23:12.510 "subsystem": "sock", 00:23:12.510 "config": [ 00:23:12.510 { 00:23:12.510 "method": "sock_set_default_impl", 00:23:12.510 "params": { 00:23:12.510 "impl_name": "posix" 00:23:12.510 } 00:23:12.510 }, 00:23:12.510 { 00:23:12.510 "method": "sock_impl_set_options", 00:23:12.510 "params": { 00:23:12.510 "impl_name": "ssl", 00:23:12.510 "recv_buf_size": 4096, 00:23:12.510 "send_buf_size": 4096, 00:23:12.510 "enable_recv_pipe": true, 00:23:12.510 "enable_quickack": false, 00:23:12.510 "enable_placement_id": 0, 00:23:12.510 "enable_zerocopy_send_server": true, 00:23:12.510 "enable_zerocopy_send_client": false, 00:23:12.510 "zerocopy_threshold": 0, 00:23:12.510 "tls_version": 0, 00:23:12.510 "enable_ktls": false 00:23:12.510 } 00:23:12.510 }, 00:23:12.510 { 00:23:12.510 "method": "sock_impl_set_options", 00:23:12.510 "params": { 00:23:12.510 "impl_name": "posix", 00:23:12.510 "recv_buf_size": 2097152, 00:23:12.510 "send_buf_size": 2097152, 00:23:12.510 "enable_recv_pipe": true, 00:23:12.510 
"enable_quickack": false, 00:23:12.510 "enable_placement_id": 0, 00:23:12.510 "enable_zerocopy_send_server": true, 00:23:12.510 "enable_zerocopy_send_client": false, 00:23:12.510 "zerocopy_threshold": 0, 00:23:12.510 "tls_version": 0, 00:23:12.510 "enable_ktls": false 00:23:12.510 } 00:23:12.510 } 00:23:12.510 ] 00:23:12.510 }, 00:23:12.510 { 00:23:12.510 "subsystem": "vmd", 00:23:12.510 "config": [] 00:23:12.510 }, 00:23:12.510 { 00:23:12.510 "subsystem": "accel", 00:23:12.510 "config": [ 00:23:12.511 { 00:23:12.511 "method": "accel_set_options", 00:23:12.511 "params": { 00:23:12.511 "small_cache_size": 128, 00:23:12.511 "large_cache_size": 16, 00:23:12.511 "task_count": 2048, 00:23:12.511 "sequence_count": 2048, 00:23:12.511 "buf_count": 2048 00:23:12.511 } 00:23:12.511 } 00:23:12.511 ] 00:23:12.511 }, 00:23:12.511 { 00:23:12.511 "subsystem": "bdev", 00:23:12.511 "config": [ 00:23:12.511 { 00:23:12.511 "method": "bdev_set_options", 00:23:12.511 "params": { 00:23:12.511 "bdev_io_pool_size": 65535, 00:23:12.511 "bdev_io_cache_size": 256, 00:23:12.511 "bdev_auto_examine": true, 00:23:12.511 "iobuf_small_cache_size": 128, 00:23:12.511 "iobuf_large_cache_size": 16 00:23:12.511 } 00:23:12.511 }, 00:23:12.511 { 00:23:12.511 "method": "bdev_raid_set_options", 00:23:12.511 "params": { 00:23:12.511 "process_window_size_kb": 1024 00:23:12.511 } 00:23:12.511 }, 00:23:12.511 { 00:23:12.511 "method": "bdev_iscsi_set_options", 00:23:12.511 "params": { 00:23:12.511 "timeout_sec": 30 00:23:12.511 } 00:23:12.511 }, 00:23:12.511 { 00:23:12.511 "method": "bdev_nvme_set_options", 00:23:12.511 "params": { 00:23:12.511 "action_on_timeout": "none", 00:23:12.511 "timeout_us": 0, 00:23:12.511 "timeout_admin_us": 0, 00:23:12.511 "keep_alive_timeout_ms": 10000, 00:23:12.511 "arbitration_burst": 0, 00:23:12.511 "low_priority_weight": 0, 00:23:12.511 "medium_priority_weight": 0, 00:23:12.511 "high_priority_weight": 0, 00:23:12.511 "nvme_adminq_poll_period_us": 10000, 00:23:12.511 "nvme_ioq_poll_period_us": 0, 00:23:12.511 "io_queue_requests": 512, 00:23:12.511 "delay_cmd_submit": true, 00:23:12.511 "transport_retry_count": 4, 00:23:12.511 "bdev_retry_count": 3, 00:23:12.511 "transport_ack_timeout": 0, 00:23:12.511 "ctrlr_loss_timeout_sec": 0, 00:23:12.511 "reconnect_delay_sec": 0, 00:23:12.511 "fast_io_fail_timeout_sec": 0, 00:23:12.511 "disable_auto_failback": false, 00:23:12.511 "generate_uuids": false, 00:23:12.511 "transport_tos": 0, 00:23:12.511 "nvme_error_stat": false, 00:23:12.511 "rdma_srq_size": 0, 00:23:12.511 "io_path_stat": false, 00:23:12.511 "allow_accel_sequence": false, 00:23:12.511 "rdma_max_cq_size": 0, 00:23:12.511 "rdma_cm_event_timeout_ms": 0, 00:23:12.511 "dhchap_digests": [ 00:23:12.511 "sha256", 00:23:12.511 "sha384", 00:23:12.511 "sha512" 00:23:12.511 ], 00:23:12.511 "dhchap_dhgroups": [ 00:23:12.511 "null", 00:23:12.511 "ffdhe2048", 00:23:12.511 "ffdhe3072", 00:23:12.511 "ffdhe4096", 00:23:12.511 "ffd 04:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:12.511 he6144", 00:23:12.511 "ffdhe8192" 00:23:12.511 ] 00:23:12.511 } 00:23:12.511 }, 00:23:12.511 { 00:23:12.511 "method": "bdev_nvme_attach_controller", 00:23:12.511 "params": { 00:23:12.511 "name": "TLSTEST", 00:23:12.511 "trtype": "TCP", 00:23:12.511 "adrfam": "IPv4", 00:23:12.511 "traddr": "10.0.0.2", 00:23:12.511 "trsvcid": "4420", 00:23:12.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.511 "prchk_reftag": false, 00:23:12.511 "prchk_guard": false, 00:23:12.511 "ctrlr_loss_timeout_sec": 0, 00:23:12.511 "reconnect_delay_sec": 0, 00:23:12.511 "fast_io_fail_timeout_sec": 0, 00:23:12.511 "psk": "/tmp/tmp.N0E2ujdMon", 00:23:12.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:12.511 "hdgst": false, 00:23:12.511 "ddgst": false 00:23:12.511 } 00:23:12.511 }, 00:23:12.511 { 00:23:12.511 "method": "bdev_nvme_set_hotplug", 00:23:12.511 "params": { 00:23:12.511 "period_us": 100000, 00:23:12.511 "enable": false 00:23:12.511 } 00:23:12.511 }, 00:23:12.511 { 00:23:12.511 "method": "bdev_wait_for_examine" 00:23:12.511 } 00:23:12.511 ] 00:23:12.511 }, 00:23:12.511 { 00:23:12.511 "subsystem": "nbd", 00:23:12.511 "config": [] 00:23:12.511 } 00:23:12.511 ] 00:23:12.511 }' 00:23:12.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:12.511 04:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:12.511 04:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.511 [2024-07-14 04:39:32.695695] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:12.511 [2024-07-14 04:39:32.695770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833736 ] 00:23:12.771 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.771 [2024-07-14 04:39:32.755071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.771 [2024-07-14 04:39:32.837497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.030 [2024-07-14 04:39:33.005639] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.030 [2024-07-14 04:39:33.005795] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:13.613 04:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:13.613 04:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:13.613 04:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:13.613 Running I/O for 10 seconds... 
00:23:25.817 00:23:25.817 Latency(us) 00:23:25.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.817 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:25.817 Verification LBA range: start 0x0 length 0x2000 00:23:25.817 TLSTESTn1 : 10.02 1014.71 3.96 0.00 0.00 125842.23 9175.04 119615.34 00:23:25.818 =================================================================================================================== 00:23:25.818 Total : 1014.71 3.96 0.00 0.00 125842.23 9175.04 119615.34 00:23:25.818 0 00:23:25.818 04:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:25.818 04:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2833736 00:23:25.818 04:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2833736 ']' 00:23:25.818 04:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2833736 00:23:25.818 04:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:25.818 04:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:25.818 04:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2833736 00:23:25.818 04:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:25.818 04:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:25.818 04:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2833736' 00:23:25.818 killing process with pid 2833736 00:23:25.818 04:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2833736 00:23:25.818 Received shutdown signal, test time was about 10.000000 seconds 00:23:25.818 00:23:25.818 Latency(us) 00:23:25.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.818 =================================================================================================================== 00:23:25.818 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:25.818 [2024-07-14 04:39:43.860170] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:25.818 04:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2833736 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2833584 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2833584 ']' 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2833584 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2833584 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2833584' 00:23:25.818 killing process with pid 2833584 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2833584 00:23:25.818 [2024-07-14 04:39:44.111042] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for 
removal in v24.09 hit 1 times 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2833584 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2835059 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2835059 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2835059 ']' 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.818 [2024-07-14 04:39:44.402930] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:25.818 [2024-07-14 04:39:44.403013] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.818 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.818 [2024-07-14 04:39:44.465838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.818 [2024-07-14 04:39:44.549377] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.818 [2024-07-14 04:39:44.549443] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.818 [2024-07-14 04:39:44.549471] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.818 [2024-07-14 04:39:44.549483] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.818 [2024-07-14 04:39:44.549493] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:25.818 [2024-07-14 04:39:44.549521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.N0E2ujdMon 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.N0E2ujdMon 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:25.818 [2024-07-14 04:39:44.915975] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.818 04:39:44 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:25.818 04:39:45 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:25.818 [2024-07-14 04:39:45.413295] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:25.818 [2024-07-14 04:39:45.413513] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.818 04:39:45 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:25.818 malloc0 00:23:25.818 04:39:45 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:25.818 04:39:46 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N0E2ujdMon 00:23:26.076 [2024-07-14 04:39:46.227509] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:26.076 04:39:46 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2835342 00:23:26.076 04:39:46 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:26.076 04:39:46 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:26.076 04:39:46 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2835342 /var/tmp/bdevperf.sock 00:23:26.076 04:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2835342 ']' 00:23:26.076 04:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.076 04:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:26.076 04:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:26.076 04:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:26.076 04:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.334 [2024-07-14 04:39:46.287316] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:26.334 [2024-07-14 04:39:46.287392] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835342 ] 00:23:26.334 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.334 [2024-07-14 04:39:46.345327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.334 [2024-07-14 04:39:46.430383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.592 04:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:26.592 04:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:26.592 04:39:46 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.N0E2ujdMon 00:23:26.592 04:39:46 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:26.850 [2024-07-14 04:39:47.005281] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:27.107 nvme0n1 00:23:27.108 04:39:47 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:27.108 Running I/O for 1 seconds... 
00:23:28.480 00:23:28.480 Latency(us) 00:23:28.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.480 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:28.480 Verification LBA range: start 0x0 length 0x2000 00:23:28.480 nvme0n1 : 1.06 1552.93 6.07 0.00 0.00 80331.78 6359.42 122722.23 00:23:28.480 =================================================================================================================== 00:23:28.480 Total : 1552.93 6.07 0.00 0.00 80331.78 6359.42 122722.23 00:23:28.480 0 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2835342 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2835342 ']' 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2835342 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2835342 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2835342' 00:23:28.480 killing process with pid 2835342 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2835342 00:23:28.480 Received shutdown signal, test time was about 1.000000 seconds 00:23:28.480 00:23:28.480 Latency(us) 00:23:28.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.480 =================================================================================================================== 00:23:28.480 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2835342 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2835059 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2835059 ']' 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2835059 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2835059 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2835059' 00:23:28.480 killing process with pid 2835059 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2835059 00:23:28.480 [2024-07-14 04:39:48.566729] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:28.480 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2835059 00:23:28.738 04:39:48 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:28.738 04:39:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:28.738 
04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:28.738 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.738 04:39:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2835625 00:23:28.738 04:39:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:28.738 04:39:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2835625 00:23:28.738 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2835625 ']' 00:23:28.738 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.738 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:28.738 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.738 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:28.738 04:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.738 [2024-07-14 04:39:48.873891] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:28.738 [2024-07-14 04:39:48.873991] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.738 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.996 [2024-07-14 04:39:48.939994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.996 [2024-07-14 04:39:49.026697] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.996 [2024-07-14 04:39:49.026757] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.996 [2024-07-14 04:39:49.026786] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.996 [2024-07-14 04:39:49.026797] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.996 [2024-07-14 04:39:49.026806] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:28.996 [2024-07-14 04:39:49.026839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.996 04:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:28.996 04:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:28.996 04:39:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:28.996 04:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.997 04:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.997 04:39:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.997 04:39:49 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:28.997 04:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.997 04:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.997 [2024-07-14 04:39:49.171417] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.255 malloc0 00:23:29.255 [2024-07-14 04:39:49.204917] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:29.255 [2024-07-14 04:39:49.205186] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.255 04:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.255 04:39:49 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2835644 00:23:29.255 04:39:49 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:29.255 04:39:49 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2835644 /var/tmp/bdevperf.sock 00:23:29.255 04:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2835644 ']' 00:23:29.255 04:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.255 04:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:29.255 04:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.255 04:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:29.255 04:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.255 [2024-07-14 04:39:49.275125] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
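(The rpc_cmd calls traced in this block build the TLS-capable subsystem that bdevperf connects to. Issued by hand against the target's RPC socket they would look roughly as follows; the NQNs, key name and listener address come from the trace, the malloc sizing is inferred from the saved config and everything else is an assumption:)
  # sketch only: target-side setup mirroring the traced rpc_cmd calls
  ./scripts/rpc.py nvmf_create_transport -t TCP
  ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096
  ./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.N0E2ujdMon
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -m 32
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --secure-channel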
00:23:29.255 [2024-07-14 04:39:49.275238] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835644 ] 00:23:29.255 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.255 [2024-07-14 04:39:49.336258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.255 [2024-07-14 04:39:49.438666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.514 04:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:29.514 04:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:29.514 04:39:49 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.N0E2ujdMon 00:23:29.771 04:39:49 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:30.029 [2024-07-14 04:39:50.024154] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:30.029 nvme0n1 00:23:30.029 04:39:50 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:30.029 Running I/O for 1 seconds... 00:23:31.401 00:23:31.401 Latency(us) 00:23:31.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.401 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:31.401 Verification LBA range: start 0x0 length 0x2000 00:23:31.401 nvme0n1 : 1.06 1542.40 6.02 0.00 0.00 80873.77 9709.04 128159.29 00:23:31.401 =================================================================================================================== 00:23:31.401 Total : 1542.40 6.02 0.00 0.00 80873.77 9709.04 128159.29 00:23:31.401 0 00:23:31.401 04:39:51 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:31.401 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.401 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.401 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.401 04:39:51 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:31.401 "subsystems": [ 00:23:31.401 { 00:23:31.401 "subsystem": "keyring", 00:23:31.401 "config": [ 00:23:31.401 { 00:23:31.401 "method": "keyring_file_add_key", 00:23:31.401 "params": { 00:23:31.401 "name": "key0", 00:23:31.401 "path": "/tmp/tmp.N0E2ujdMon" 00:23:31.401 } 00:23:31.401 } 00:23:31.401 ] 00:23:31.401 }, 00:23:31.401 { 00:23:31.401 "subsystem": "iobuf", 00:23:31.401 "config": [ 00:23:31.401 { 00:23:31.401 "method": "iobuf_set_options", 00:23:31.401 "params": { 00:23:31.401 "small_pool_count": 8192, 00:23:31.401 "large_pool_count": 1024, 00:23:31.401 "small_bufsize": 8192, 00:23:31.401 "large_bufsize": 135168 00:23:31.401 } 00:23:31.401 } 00:23:31.401 ] 00:23:31.401 }, 00:23:31.401 { 00:23:31.401 "subsystem": "sock", 00:23:31.401 "config": [ 00:23:31.401 { 00:23:31.401 "method": "sock_set_default_impl", 00:23:31.401 "params": { 00:23:31.401 "impl_name": "posix" 00:23:31.401 } 00:23:31.401 }, 00:23:31.401 
{ 00:23:31.401 "method": "sock_impl_set_options", 00:23:31.401 "params": { 00:23:31.401 "impl_name": "ssl", 00:23:31.401 "recv_buf_size": 4096, 00:23:31.401 "send_buf_size": 4096, 00:23:31.401 "enable_recv_pipe": true, 00:23:31.401 "enable_quickack": false, 00:23:31.401 "enable_placement_id": 0, 00:23:31.401 "enable_zerocopy_send_server": true, 00:23:31.401 "enable_zerocopy_send_client": false, 00:23:31.401 "zerocopy_threshold": 0, 00:23:31.402 "tls_version": 0, 00:23:31.402 "enable_ktls": false 00:23:31.402 } 00:23:31.402 }, 00:23:31.402 { 00:23:31.402 "method": "sock_impl_set_options", 00:23:31.402 "params": { 00:23:31.402 "impl_name": "posix", 00:23:31.402 "recv_buf_size": 2097152, 00:23:31.402 "send_buf_size": 2097152, 00:23:31.402 "enable_recv_pipe": true, 00:23:31.402 "enable_quickack": false, 00:23:31.402 "enable_placement_id": 0, 00:23:31.402 "enable_zerocopy_send_server": true, 00:23:31.402 "enable_zerocopy_send_client": false, 00:23:31.402 "zerocopy_threshold": 0, 00:23:31.402 "tls_version": 0, 00:23:31.402 "enable_ktls": false 00:23:31.402 } 00:23:31.402 } 00:23:31.402 ] 00:23:31.402 }, 00:23:31.402 { 00:23:31.402 "subsystem": "vmd", 00:23:31.402 "config": [] 00:23:31.402 }, 00:23:31.402 { 00:23:31.402 "subsystem": "accel", 00:23:31.402 "config": [ 00:23:31.402 { 00:23:31.402 "method": "accel_set_options", 00:23:31.402 "params": { 00:23:31.402 "small_cache_size": 128, 00:23:31.402 "large_cache_size": 16, 00:23:31.402 "task_count": 2048, 00:23:31.402 "sequence_count": 2048, 00:23:31.402 "buf_count": 2048 00:23:31.402 } 00:23:31.402 } 00:23:31.402 ] 00:23:31.402 }, 00:23:31.402 { 00:23:31.402 "subsystem": "bdev", 00:23:31.402 "config": [ 00:23:31.402 { 00:23:31.402 "method": "bdev_set_options", 00:23:31.402 "params": { 00:23:31.402 "bdev_io_pool_size": 65535, 00:23:31.402 "bdev_io_cache_size": 256, 00:23:31.402 "bdev_auto_examine": true, 00:23:31.402 "iobuf_small_cache_size": 128, 00:23:31.402 "iobuf_large_cache_size": 16 00:23:31.402 } 00:23:31.402 }, 00:23:31.402 { 00:23:31.402 "method": "bdev_raid_set_options", 00:23:31.402 "params": { 00:23:31.402 "process_window_size_kb": 1024 00:23:31.402 } 00:23:31.402 }, 00:23:31.402 { 00:23:31.402 "method": "bdev_iscsi_set_options", 00:23:31.402 "params": { 00:23:31.402 "timeout_sec": 30 00:23:31.402 } 00:23:31.402 }, 00:23:31.402 { 00:23:31.402 "method": "bdev_nvme_set_options", 00:23:31.402 "params": { 00:23:31.402 "action_on_timeout": "none", 00:23:31.402 "timeout_us": 0, 00:23:31.402 "timeout_admin_us": 0, 00:23:31.402 "keep_alive_timeout_ms": 10000, 00:23:31.402 "arbitration_burst": 0, 00:23:31.402 "low_priority_weight": 0, 00:23:31.402 "medium_priority_weight": 0, 00:23:31.402 "high_priority_weight": 0, 00:23:31.402 "nvme_adminq_poll_period_us": 10000, 00:23:31.402 "nvme_ioq_poll_period_us": 0, 00:23:31.402 "io_queue_requests": 0, 00:23:31.402 "delay_cmd_submit": true, 00:23:31.402 "transport_retry_count": 4, 00:23:31.402 "bdev_retry_count": 3, 00:23:31.402 "transport_ack_timeout": 0, 00:23:31.402 "ctrlr_loss_timeout_sec": 0, 00:23:31.402 "reconnect_delay_sec": 0, 00:23:31.402 "fast_io_fail_timeout_sec": 0, 00:23:31.402 "disable_auto_failback": false, 00:23:31.402 "generate_uuids": false, 00:23:31.402 "transport_tos": 0, 00:23:31.402 "nvme_error_stat": false, 00:23:31.402 "rdma_srq_size": 0, 00:23:31.402 "io_path_stat": false, 00:23:31.402 "allow_accel_sequence": false, 00:23:31.402 "rdma_max_cq_size": 0, 00:23:31.402 "rdma_cm_event_timeout_ms": 0, 00:23:31.402 "dhchap_digests": [ 00:23:31.402 "sha256", 00:23:31.402 "sha384", 
00:23:31.402 "sha512" 00:23:31.402 ], 00:23:31.402 "dhchap_dhgroups": [ 00:23:31.402 "null", 00:23:31.402 "ffdhe2048", 00:23:31.402 "ffdhe3072", 00:23:31.402 "ffdhe4096", 00:23:31.402 "ffdhe6144", 00:23:31.402 "ffdhe8192" 00:23:31.402 ] 00:23:31.402 } 00:23:31.402 }, 00:23:31.402 { 00:23:31.402 "method": "bdev_nvme_set_hotplug", 00:23:31.402 "params": { 00:23:31.402 "period_us": 100000, 00:23:31.402 "enable": false 00:23:31.402 } 00:23:31.402 }, 00:23:31.402 { 00:23:31.402 "method": "bdev_malloc_create", 00:23:31.402 "params": { 00:23:31.402 "name": "malloc0", 00:23:31.402 "num_blocks": 8192, 00:23:31.402 "block_size": 4096, 00:23:31.402 "physical_block_size": 4096, 00:23:31.402 "uuid": "522d696c-577b-4a1c-8c5c-d1d5d3303533", 00:23:31.402 "optimal_io_boundary": 0 00:23:31.402 } 00:23:31.402 }, 00:23:31.402 { 00:23:31.402 "method": "bdev_wait_for_examine" 00:23:31.402 } 00:23:31.402 ] 00:23:31.402 }, 00:23:31.402 { 00:23:31.402 "subsystem": "nbd", 00:23:31.402 "config": [] 00:23:31.402 }, 00:23:31.402 { 00:23:31.402 "subsystem": "scheduler", 00:23:31.402 "config": [ 00:23:31.402 { 00:23:31.402 "method": "framework_set_scheduler", 00:23:31.402 "params": { 00:23:31.402 "name": "static" 00:23:31.402 } 00:23:31.402 } 00:23:31.402 ] 00:23:31.402 }, 00:23:31.402 { 00:23:31.402 "subsystem": "nvmf", 00:23:31.402 "config": [ 00:23:31.402 { 00:23:31.402 "method": "nvmf_set_config", 00:23:31.402 "params": { 00:23:31.402 "discovery_filter": "match_any", 00:23:31.402 "admin_cmd_passthru": { 00:23:31.402 "identify_ctrlr": false 00:23:31.402 } 00:23:31.402 } 00:23:31.402 }, 00:23:31.402 { 00:23:31.402 "method": "nvmf_set_max_subsystems", 00:23:31.402 "params": { 00:23:31.402 "max_subsystems": 1024 00:23:31.402 } 00:23:31.402 }, 00:23:31.402 { 00:23:31.402 "method": "nvmf_set_crdt", 00:23:31.402 "params": { 00:23:31.402 "crdt1": 0, 00:23:31.402 "crdt2": 0, 00:23:31.402 "crdt3": 0 00:23:31.402 } 00:23:31.402 }, 00:23:31.402 { 00:23:31.402 "method": "nvmf_create_transport", 00:23:31.402 "params": { 00:23:31.402 "trtype": "TCP", 00:23:31.402 "max_queue_depth": 128, 00:23:31.402 "max_io_qpairs_per_ctrlr": 127, 00:23:31.402 "in_capsule_data_size": 4096, 00:23:31.402 "max_io_size": 131072, 00:23:31.402 "io_unit_size": 131072, 00:23:31.402 "max_aq_depth": 128, 00:23:31.402 "num_shared_buffers": 511, 00:23:31.402 "buf_cache_size": 4294967295, 00:23:31.402 "dif_insert_or_strip": false, 00:23:31.402 "zcopy": false, 00:23:31.402 "c2h_success": false, 00:23:31.402 "sock_priority": 0, 00:23:31.402 "abort_timeout_sec": 1, 00:23:31.402 "ack_timeout": 0, 00:23:31.402 "data_wr_pool_size": 0 00:23:31.402 } 00:23:31.402 }, 00:23:31.402 { 00:23:31.402 "method": "nvmf_create_subsystem", 00:23:31.402 "params": { 00:23:31.402 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.402 "allow_any_host": false, 00:23:31.402 "serial_number": "00000000000000000000", 00:23:31.402 "model_number": "SPDK bdev Controller", 00:23:31.402 "max_namespaces": 32, 00:23:31.402 "min_cntlid": 1, 00:23:31.402 "max_cntlid": 65519, 00:23:31.402 "ana_reporting": false 00:23:31.402 } 00:23:31.402 }, 00:23:31.402 { 00:23:31.402 "method": "nvmf_subsystem_add_host", 00:23:31.402 "params": { 00:23:31.402 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.402 "host": "nqn.2016-06.io.spdk:host1", 00:23:31.402 "psk": "key0" 00:23:31.402 } 00:23:31.402 }, 00:23:31.402 { 00:23:31.402 "method": "nvmf_subsystem_add_ns", 00:23:31.402 "params": { 00:23:31.402 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.402 "namespace": { 00:23:31.402 "nsid": 1, 00:23:31.402 "bdev_name": 
"malloc0", 00:23:31.402 "nguid": "522D696C577B4A1C8C5CD1D5D3303533", 00:23:31.402 "uuid": "522d696c-577b-4a1c-8c5c-d1d5d3303533", 00:23:31.402 "no_auto_visible": false 00:23:31.402 } 00:23:31.402 } 00:23:31.402 }, 00:23:31.402 { 00:23:31.402 "method": "nvmf_subsystem_add_listener", 00:23:31.402 "params": { 00:23:31.402 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.402 "listen_address": { 00:23:31.402 "trtype": "TCP", 00:23:31.402 "adrfam": "IPv4", 00:23:31.402 "traddr": "10.0.0.2", 00:23:31.402 "trsvcid": "4420" 00:23:31.402 }, 00:23:31.402 "secure_channel": true 00:23:31.402 } 00:23:31.402 } 00:23:31.402 ] 00:23:31.402 } 00:23:31.402 ] 00:23:31.402 }' 00:23:31.402 04:39:51 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:31.661 04:39:51 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:31.661 "subsystems": [ 00:23:31.661 { 00:23:31.661 "subsystem": "keyring", 00:23:31.661 "config": [ 00:23:31.661 { 00:23:31.661 "method": "keyring_file_add_key", 00:23:31.661 "params": { 00:23:31.661 "name": "key0", 00:23:31.661 "path": "/tmp/tmp.N0E2ujdMon" 00:23:31.661 } 00:23:31.661 } 00:23:31.661 ] 00:23:31.661 }, 00:23:31.661 { 00:23:31.661 "subsystem": "iobuf", 00:23:31.661 "config": [ 00:23:31.661 { 00:23:31.661 "method": "iobuf_set_options", 00:23:31.661 "params": { 00:23:31.661 "small_pool_count": 8192, 00:23:31.661 "large_pool_count": 1024, 00:23:31.661 "small_bufsize": 8192, 00:23:31.661 "large_bufsize": 135168 00:23:31.661 } 00:23:31.661 } 00:23:31.661 ] 00:23:31.661 }, 00:23:31.661 { 00:23:31.661 "subsystem": "sock", 00:23:31.661 "config": [ 00:23:31.661 { 00:23:31.661 "method": "sock_set_default_impl", 00:23:31.661 "params": { 00:23:31.661 "impl_name": "posix" 00:23:31.661 } 00:23:31.661 }, 00:23:31.661 { 00:23:31.661 "method": "sock_impl_set_options", 00:23:31.661 "params": { 00:23:31.661 "impl_name": "ssl", 00:23:31.661 "recv_buf_size": 4096, 00:23:31.661 "send_buf_size": 4096, 00:23:31.661 "enable_recv_pipe": true, 00:23:31.661 "enable_quickack": false, 00:23:31.661 "enable_placement_id": 0, 00:23:31.661 "enable_zerocopy_send_server": true, 00:23:31.661 "enable_zerocopy_send_client": false, 00:23:31.661 "zerocopy_threshold": 0, 00:23:31.661 "tls_version": 0, 00:23:31.661 "enable_ktls": false 00:23:31.661 } 00:23:31.661 }, 00:23:31.661 { 00:23:31.661 "method": "sock_impl_set_options", 00:23:31.661 "params": { 00:23:31.661 "impl_name": "posix", 00:23:31.661 "recv_buf_size": 2097152, 00:23:31.661 "send_buf_size": 2097152, 00:23:31.661 "enable_recv_pipe": true, 00:23:31.661 "enable_quickack": false, 00:23:31.661 "enable_placement_id": 0, 00:23:31.661 "enable_zerocopy_send_server": true, 00:23:31.661 "enable_zerocopy_send_client": false, 00:23:31.661 "zerocopy_threshold": 0, 00:23:31.661 "tls_version": 0, 00:23:31.661 "enable_ktls": false 00:23:31.661 } 00:23:31.661 } 00:23:31.661 ] 00:23:31.661 }, 00:23:31.661 { 00:23:31.661 "subsystem": "vmd", 00:23:31.661 "config": [] 00:23:31.661 }, 00:23:31.661 { 00:23:31.661 "subsystem": "accel", 00:23:31.661 "config": [ 00:23:31.661 { 00:23:31.661 "method": "accel_set_options", 00:23:31.661 "params": { 00:23:31.661 "small_cache_size": 128, 00:23:31.661 "large_cache_size": 16, 00:23:31.661 "task_count": 2048, 00:23:31.661 "sequence_count": 2048, 00:23:31.661 "buf_count": 2048 00:23:31.661 } 00:23:31.661 } 00:23:31.661 ] 00:23:31.661 }, 00:23:31.661 { 00:23:31.661 "subsystem": "bdev", 00:23:31.661 "config": [ 00:23:31.661 { 00:23:31.661 
"method": "bdev_set_options", 00:23:31.661 "params": { 00:23:31.661 "bdev_io_pool_size": 65535, 00:23:31.661 "bdev_io_cache_size": 256, 00:23:31.661 "bdev_auto_examine": true, 00:23:31.661 "iobuf_small_cache_size": 128, 00:23:31.661 "iobuf_large_cache_size": 16 00:23:31.661 } 00:23:31.661 }, 00:23:31.661 { 00:23:31.661 "method": "bdev_raid_set_options", 00:23:31.661 "params": { 00:23:31.661 "process_window_size_kb": 1024 00:23:31.661 } 00:23:31.661 }, 00:23:31.661 { 00:23:31.661 "method": "bdev_iscsi_set_options", 00:23:31.661 "params": { 00:23:31.661 "timeout_sec": 30 00:23:31.661 } 00:23:31.661 }, 00:23:31.661 { 00:23:31.661 "method": "bdev_nvme_set_options", 00:23:31.661 "params": { 00:23:31.661 "action_on_timeout": "none", 00:23:31.661 "timeout_us": 0, 00:23:31.661 "timeout_admin_us": 0, 00:23:31.661 "keep_alive_timeout_ms": 10000, 00:23:31.661 "arbitration_burst": 0, 00:23:31.661 "low_priority_weight": 0, 00:23:31.661 "medium_priority_weight": 0, 00:23:31.661 "high_priority_weight": 0, 00:23:31.661 "nvme_adminq_poll_period_us": 10000, 00:23:31.661 "nvme_ioq_poll_period_us": 0, 00:23:31.662 "io_queue_requests": 512, 00:23:31.662 "delay_cmd_submit": true, 00:23:31.662 "transport_retry_count": 4, 00:23:31.662 "bdev_retry_count": 3, 00:23:31.662 "transport_ack_timeout": 0, 00:23:31.662 "ctrlr_loss_timeout_sec": 0, 00:23:31.662 "reconnect_delay_sec": 0, 00:23:31.662 "fast_io_fail_timeout_sec": 0, 00:23:31.662 "disable_auto_failback": false, 00:23:31.662 "generate_uuids": false, 00:23:31.662 "transport_tos": 0, 00:23:31.662 "nvme_error_stat": false, 00:23:31.662 "rdma_srq_size": 0, 00:23:31.662 "io_path_stat": false, 00:23:31.662 "allow_accel_sequence": false, 00:23:31.662 "rdma_max_cq_size": 0, 00:23:31.662 "rdma_cm_event_timeout_ms": 0, 00:23:31.662 "dhchap_digests": [ 00:23:31.662 "sha256", 00:23:31.662 "sha384", 00:23:31.662 "sha512" 00:23:31.662 ], 00:23:31.662 "dhchap_dhgroups": [ 00:23:31.662 "null", 00:23:31.662 "ffdhe2048", 00:23:31.662 "ffdhe3072", 00:23:31.662 "ffdhe4096", 00:23:31.662 "ffdhe6144", 00:23:31.662 "ffdhe8192" 00:23:31.662 ] 00:23:31.662 } 00:23:31.662 }, 00:23:31.662 { 00:23:31.662 "method": "bdev_nvme_attach_controller", 00:23:31.662 "params": { 00:23:31.662 "name": "nvme0", 00:23:31.662 "trtype": "TCP", 00:23:31.662 "adrfam": "IPv4", 00:23:31.662 "traddr": "10.0.0.2", 00:23:31.662 "trsvcid": "4420", 00:23:31.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.662 "prchk_reftag": false, 00:23:31.662 "prchk_guard": false, 00:23:31.662 "ctrlr_loss_timeout_sec": 0, 00:23:31.662 "reconnect_delay_sec": 0, 00:23:31.662 "fast_io_fail_timeout_sec": 0, 00:23:31.662 "psk": "key0", 00:23:31.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.662 "hdgst": false, 00:23:31.662 "ddgst": false 00:23:31.662 } 00:23:31.662 }, 00:23:31.662 { 00:23:31.662 "method": "bdev_nvme_set_hotplug", 00:23:31.662 "params": { 00:23:31.662 "period_us": 100000, 00:23:31.662 "enable": false 00:23:31.662 } 00:23:31.662 }, 00:23:31.662 { 00:23:31.662 "method": "bdev_enable_histogram", 00:23:31.662 "params": { 00:23:31.662 "name": "nvme0n1", 00:23:31.662 "enable": true 00:23:31.662 } 00:23:31.662 }, 00:23:31.662 { 00:23:31.662 "method": "bdev_wait_for_examine" 00:23:31.662 } 00:23:31.662 ] 00:23:31.662 }, 00:23:31.662 { 00:23:31.662 "subsystem": "nbd", 00:23:31.662 "config": [] 00:23:31.662 } 00:23:31.662 ] 00:23:31.662 }' 00:23:31.662 04:39:51 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2835644 00:23:31.662 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2835644 
']' 00:23:31.662 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2835644 00:23:31.662 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:31.662 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:31.662 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2835644 00:23:31.662 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:31.662 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:31.662 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2835644' 00:23:31.662 killing process with pid 2835644 00:23:31.662 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2835644 00:23:31.662 Received shutdown signal, test time was about 1.000000 seconds 00:23:31.662 00:23:31.662 Latency(us) 00:23:31.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.662 =================================================================================================================== 00:23:31.662 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:31.662 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2835644 00:23:31.921 04:39:51 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2835625 00:23:31.921 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2835625 ']' 00:23:31.921 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2835625 00:23:31.921 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:31.921 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:31.921 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2835625 00:23:31.921 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:31.921 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:31.921 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2835625' 00:23:31.921 killing process with pid 2835625 00:23:31.921 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2835625 00:23:31.921 04:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2835625 00:23:32.179 04:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:32.180 04:39:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:32.180 04:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:32.180 "subsystems": [ 00:23:32.180 { 00:23:32.180 "subsystem": "keyring", 00:23:32.180 "config": [ 00:23:32.180 { 00:23:32.180 "method": "keyring_file_add_key", 00:23:32.180 "params": { 00:23:32.180 "name": "key0", 00:23:32.180 "path": "/tmp/tmp.N0E2ujdMon" 00:23:32.180 } 00:23:32.180 } 00:23:32.180 ] 00:23:32.180 }, 00:23:32.180 { 00:23:32.180 "subsystem": "iobuf", 00:23:32.180 "config": [ 00:23:32.180 { 00:23:32.180 "method": "iobuf_set_options", 00:23:32.180 "params": { 00:23:32.180 "small_pool_count": 8192, 00:23:32.180 "large_pool_count": 1024, 00:23:32.180 "small_bufsize": 8192, 00:23:32.180 "large_bufsize": 135168 00:23:32.180 } 00:23:32.180 } 00:23:32.180 ] 00:23:32.180 }, 00:23:32.180 { 00:23:32.180 "subsystem": "sock", 00:23:32.180 "config": [ 00:23:32.180 { 00:23:32.180 "method": "sock_set_default_impl", 
00:23:32.180 "params": { 00:23:32.180 "impl_name": "posix" 00:23:32.180 } 00:23:32.180 }, 00:23:32.180 { 00:23:32.180 "method": "sock_impl_set_options", 00:23:32.180 "params": { 00:23:32.180 "impl_name": "ssl", 00:23:32.180 "recv_buf_size": 4096, 00:23:32.180 "send_buf_size": 4096, 00:23:32.180 "enable_recv_pipe": true, 00:23:32.180 "enable_quickack": false, 00:23:32.180 "enable_placement_id": 0, 00:23:32.180 "enable_zerocopy_send_server": true, 00:23:32.180 "enable_zerocopy_send_client": false, 00:23:32.180 "zerocopy_threshold": 0, 00:23:32.180 "tls_version": 0, 00:23:32.180 "enable_ktls": false 00:23:32.180 } 00:23:32.180 }, 00:23:32.180 { 00:23:32.180 "method": "sock_impl_set_options", 00:23:32.180 "params": { 00:23:32.180 "impl_name": "posix", 00:23:32.180 "recv_buf_size": 2097152, 00:23:32.180 "send_buf_size": 2097152, 00:23:32.180 "enable_recv_pipe": true, 00:23:32.180 "enable_quickack": false, 00:23:32.180 "enable_placement_id": 0, 00:23:32.180 "enable_zerocopy_send_server": true, 00:23:32.180 "enable_zerocopy_send_client": false, 00:23:32.180 "zerocopy_threshold": 0, 00:23:32.180 "tls_version": 0, 00:23:32.180 "enable_ktls": false 00:23:32.180 } 00:23:32.180 } 00:23:32.180 ] 00:23:32.180 }, 00:23:32.180 { 00:23:32.180 "subsystem": "vmd", 00:23:32.180 "config": [] 00:23:32.180 }, 00:23:32.180 { 00:23:32.180 "subsystem": "accel", 00:23:32.180 "config": [ 00:23:32.180 { 00:23:32.180 "method": "accel_set_options", 00:23:32.180 "params": { 00:23:32.180 "small_cache_size": 128, 00:23:32.180 "large_cache_size": 16, 00:23:32.180 "task_count": 2048, 00:23:32.180 "sequence_count": 2048, 00:23:32.180 "buf_count": 2048 00:23:32.180 } 00:23:32.180 } 00:23:32.180 ] 00:23:32.180 }, 00:23:32.180 { 00:23:32.180 "subsystem": "bdev", 00:23:32.180 "config": [ 00:23:32.180 { 00:23:32.180 "method": "bdev_set_options", 00:23:32.180 "params": { 00:23:32.180 "bdev_io_pool_size": 65535, 00:23:32.180 "bdev_io_cache_size": 256, 00:23:32.180 "bdev_auto_examine": true, 00:23:32.180 "iobuf_small_cache_size": 128, 00:23:32.180 "iobuf_large_cache_size": 16 00:23:32.180 } 00:23:32.180 }, 00:23:32.180 { 00:23:32.180 "method": "bdev_raid_set_options", 00:23:32.180 "params": { 00:23:32.180 "process_window_size_kb": 1024 00:23:32.180 } 00:23:32.180 }, 00:23:32.180 { 00:23:32.180 "method": "bdev_iscsi_set_options", 00:23:32.180 "params": { 00:23:32.180 "timeout_sec": 30 00:23:32.180 } 00:23:32.180 }, 00:23:32.180 { 00:23:32.180 "method": "bdev_nvme_set_options", 00:23:32.180 "params": { 00:23:32.180 "action_on_timeout": "none", 00:23:32.180 "timeout_us": 0, 00:23:32.180 "timeout_admin_us": 0, 00:23:32.180 "keep_alive_timeout_ms": 10000, 00:23:32.180 "arbitration_burst": 0, 00:23:32.180 "low_priority_weight": 0, 00:23:32.180 "medium_priority_weight": 0, 00:23:32.180 "high_priority_weight": 0, 00:23:32.180 "nvme_adminq_poll_period_us": 10000, 00:23:32.180 "nvme_ioq_poll_period_us": 0, 00:23:32.180 "io_queue_requests": 0, 00:23:32.180 "delay_cmd_submit": true, 00:23:32.180 "transport_retry_count": 4, 00:23:32.180 "bdev_retry_count": 3, 00:23:32.180 "transport_ack_timeout": 0, 00:23:32.180 "ctrlr_loss_timeout_sec": 0, 00:23:32.180 "reconnect_delay_sec": 0, 00:23:32.180 "fast_io_fail_timeout_sec": 0, 00:23:32.180 "disable_auto_failback": false, 00:23:32.180 "generate_uuids": false, 00:23:32.180 "transport_tos": 0, 00:23:32.180 "nvme_error_stat": false, 00:23:32.180 "rdma_srq_size": 0, 00:23:32.180 "io_path_stat": false, 00:23:32.180 "allow_accel_sequence": false, 00:23:32.180 "rdma_max_cq_size": 0, 00:23:32.180 
"rdma_cm_event_timeout_ms": 0, 00:23:32.180 "dhchap_digests": [ 00:23:32.180 "sha256", 00:23:32.180 "sha384", 00:23:32.180 "sha512" 00:23:32.180 ], 00:23:32.180 "dhchap_dhgroups": [ 00:23:32.180 "null", 00:23:32.180 "ffdhe2048", 00:23:32.180 "ffdhe3072", 00:23:32.180 "ffdhe4096", 00:23:32.180 "ffdhe6144", 00:23:32.180 "ffdhe8192" 00:23:32.180 ] 00:23:32.180 } 00:23:32.180 }, 00:23:32.180 { 00:23:32.180 "method": "bdev_nvme_set_hotplug", 00:23:32.180 "params": { 00:23:32.180 "period_us": 100000, 00:23:32.180 "enable": false 00:23:32.180 } 00:23:32.180 }, 00:23:32.180 { 00:23:32.180 "method": "bdev_malloc_create", 00:23:32.180 "params": { 00:23:32.180 "name": "malloc0", 00:23:32.180 "num_blocks": 8192, 00:23:32.180 "block_size": 4096, 00:23:32.180 "physical_block_size": 4096, 00:23:32.180 "uuid": "522d696c-577b-4a1c-8c5c-d1d5d3303533", 00:23:32.180 "optimal_io_boundary": 0 00:23:32.180 } 00:23:32.180 }, 00:23:32.180 { 00:23:32.180 "method": "bdev_wait_for_examine" 00:23:32.180 } 00:23:32.180 ] 00:23:32.180 }, 00:23:32.180 { 00:23:32.180 "subsystem": "nbd", 00:23:32.180 "config": [] 00:23:32.180 }, 00:23:32.180 { 00:23:32.180 "subsystem": "scheduler", 00:23:32.180 "config": [ 00:23:32.180 { 00:23:32.180 "method": "framework_set_scheduler", 00:23:32.180 "params": { 00:23:32.180 "name": "static" 00:23:32.180 } 00:23:32.180 } 00:23:32.180 ] 00:23:32.180 }, 00:23:32.180 { 00:23:32.180 "subsystem": "nvmf", 00:23:32.180 "config": [ 00:23:32.180 { 00:23:32.180 "method": "nvmf_set_config", 00:23:32.180 "params": { 00:23:32.180 "discovery_filter": "match_any", 00:23:32.180 "admin_cmd_passthru": { 00:23:32.180 "identify_ctrlr": false 00:23:32.180 } 00:23:32.180 } 00:23:32.180 }, 00:23:32.180 { 00:23:32.180 "method": "nvmf_set_max_subsystems", 00:23:32.180 "params": { 00:23:32.180 "max_subsystems": 1024 00:23:32.180 } 00:23:32.180 }, 00:23:32.180 { 00:23:32.180 "method": "nvmf_set_crdt", 00:23:32.180 "params": { 00:23:32.180 "crdt1": 0, 00:23:32.180 "crdt2": 0, 00:23:32.180 "crdt3": 0 00:23:32.180 } 00:23:32.180 }, 00:23:32.180 { 00:23:32.180 "method": "nvmf_create_transport", 00:23:32.180 "params": { 00:23:32.180 "trtype": "TCP", 00:23:32.180 "max_queue_depth": 128, 00:23:32.180 "max_io_qpairs_per_ctrlr": 127, 00:23:32.180 "in_capsule_data_size": 4096, 00:23:32.180 "max_io_size": 131072, 00:23:32.180 "io_unit_size": 131072, 00:23:32.180 "max_aq_depth": 128, 00:23:32.180 "num_shared_buffers": 511, 00:23:32.180 "buf_cache_size": 4294967295, 00:23:32.180 "dif_insert_or_strip": false, 00:23:32.180 "zcopy": false, 00:23:32.180 "c2h_success": false, 00:23:32.180 "sock_priority": 0, 00:23:32.180 "abort_timeout_sec": 1, 00:23:32.180 "ack_timeout": 0, 00:23:32.180 "data_wr_pool_size": 0 00:23:32.180 } 00:23:32.180 }, 00:23:32.180 { 00:23:32.180 "method": "nvmf_create_subsystem", 00:23:32.180 "params": { 00:23:32.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.180 "allow_any_host": false, 00:23:32.180 "serial_number": "00000000000000000000", 00:23:32.181 "model_number": "SPDK bdev Controller", 00:23:32.181 "max_namespaces": 32, 00:23:32.181 "min_cntlid": 1, 00:23:32.181 "max_cntlid": 65519, 00:23:32.181 "ana_reporting": false 00:23:32.181 } 00:23:32.181 }, 00:23:32.181 { 00:23:32.181 "method": "nvmf_subsystem_add_host", 00:23:32.181 "params": { 00:23:32.181 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.181 "host": "nqn.2016-06.io.spdk:host1", 00:23:32.181 "psk": "key0" 00:23:32.181 } 00:23:32.181 }, 00:23:32.181 { 00:23:32.181 "method": "nvmf_subsystem_add_ns", 00:23:32.181 "params": { 00:23:32.181 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:32.181 "namespace": { 00:23:32.181 "nsid": 1, 00:23:32.181 "bdev_name": "malloc0", 00:23:32.181 "nguid": "522D696C577B4A1C8C5CD1D5D3303533", 00:23:32.181 "uuid": "522d696c-577b-4a1c-8c5c-d1d5d3303533", 00:23:32.181 "no_auto_visible": false 00:23:32.181 } 00:23:32.181 } 00:23:32.181 }, 00:23:32.181 { 00:23:32.181 "method": "nvmf_subsystem_add_listener", 00:23:32.181 "params": { 00:23:32.181 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.181 "listen_address": { 00:23:32.181 "trtype": "TCP", 00:23:32.181 "adrfam": "IPv4", 00:23:32.181 "traddr": "10.0.0.2", 00:23:32.181 "trsvcid": "4420" 00:23:32.181 }, 00:23:32.181 "secure_channel": true 00:23:32.181 } 00:23:32.181 } 00:23:32.181 ] 00:23:32.181 } 00:23:32.181 ] 00:23:32.181 }' 00:23:32.181 04:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:32.181 04:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.181 04:39:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2836055 00:23:32.181 04:39:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:32.181 04:39:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2836055 00:23:32.181 04:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2836055 ']' 00:23:32.181 04:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.181 04:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:32.181 04:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.181 04:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:32.181 04:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.181 [2024-07-14 04:39:52.302881] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:32.181 [2024-07-14 04:39:52.302973] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.181 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.440 [2024-07-14 04:39:52.378193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.440 [2024-07-14 04:39:52.472282] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.440 [2024-07-14 04:39:52.472354] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.440 [2024-07-14 04:39:52.472370] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.440 [2024-07-14 04:39:52.472383] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.440 [2024-07-14 04:39:52.472394] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:32.440 [2024-07-14 04:39:52.472488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.698 [2024-07-14 04:39:52.718656] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.698 [2024-07-14 04:39:52.750664] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:32.698 [2024-07-14 04:39:52.770034] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.262 04:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:33.262 04:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:33.262 04:39:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:33.262 04:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.262 04:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.262 04:39:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.262 04:39:53 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2836206 00:23:33.262 04:39:53 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2836206 /var/tmp/bdevperf.sock 00:23:33.262 04:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2836206 ']' 00:23:33.262 04:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.262 04:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:33.262 04:39:53 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:33.262 04:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:33.262 04:39:53 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:33.262 "subsystems": [ 00:23:33.262 { 00:23:33.262 "subsystem": "keyring", 00:23:33.262 "config": [ 00:23:33.262 { 00:23:33.262 "method": "keyring_file_add_key", 00:23:33.262 "params": { 00:23:33.262 "name": "key0", 00:23:33.262 "path": "/tmp/tmp.N0E2ujdMon" 00:23:33.262 } 00:23:33.262 } 00:23:33.262 ] 00:23:33.262 }, 00:23:33.262 { 00:23:33.262 "subsystem": "iobuf", 00:23:33.262 "config": [ 00:23:33.262 { 00:23:33.262 "method": "iobuf_set_options", 00:23:33.262 "params": { 00:23:33.262 "small_pool_count": 8192, 00:23:33.262 "large_pool_count": 1024, 00:23:33.262 "small_bufsize": 8192, 00:23:33.262 "large_bufsize": 135168 00:23:33.262 } 00:23:33.262 } 00:23:33.262 ] 00:23:33.262 }, 00:23:33.262 { 00:23:33.262 "subsystem": "sock", 00:23:33.262 "config": [ 00:23:33.262 { 00:23:33.262 "method": "sock_set_default_impl", 00:23:33.262 "params": { 00:23:33.262 "impl_name": "posix" 00:23:33.262 } 00:23:33.262 }, 00:23:33.262 { 00:23:33.262 "method": "sock_impl_set_options", 00:23:33.262 "params": { 00:23:33.262 "impl_name": "ssl", 00:23:33.262 "recv_buf_size": 4096, 00:23:33.262 "send_buf_size": 4096, 00:23:33.262 "enable_recv_pipe": true, 00:23:33.262 "enable_quickack": false, 00:23:33.262 "enable_placement_id": 0, 00:23:33.262 "enable_zerocopy_send_server": true, 00:23:33.262 "enable_zerocopy_send_client": false, 00:23:33.262 "zerocopy_threshold": 0, 00:23:33.262 "tls_version": 0, 00:23:33.262 "enable_ktls": false 00:23:33.262 } 00:23:33.262 }, 00:23:33.262 { 00:23:33.262 "method": "sock_impl_set_options", 00:23:33.262 "params": { 00:23:33.262 "impl_name": "posix", 00:23:33.262 "recv_buf_size": 2097152, 00:23:33.262 "send_buf_size": 2097152, 00:23:33.262 "enable_recv_pipe": true, 00:23:33.262 "enable_quickack": false, 00:23:33.262 "enable_placement_id": 0, 00:23:33.262 "enable_zerocopy_send_server": true, 00:23:33.262 "enable_zerocopy_send_client": false, 00:23:33.262 "zerocopy_threshold": 0, 00:23:33.262 "tls_version": 0, 00:23:33.262 "enable_ktls": false 00:23:33.262 } 00:23:33.262 } 00:23:33.262 ] 00:23:33.262 }, 00:23:33.262 { 00:23:33.262 "subsystem": "vmd", 00:23:33.262 "config": [] 00:23:33.262 }, 00:23:33.262 { 00:23:33.262 "subsystem": "accel", 00:23:33.262 "config": [ 00:23:33.262 { 00:23:33.262 "method": "accel_set_options", 00:23:33.262 "params": { 00:23:33.262 "small_cache_size": 128, 00:23:33.262 "large_cache_size": 16, 00:23:33.262 "task_count": 2048, 00:23:33.262 "sequence_count": 2048, 00:23:33.262 "buf_count": 2048 00:23:33.262 } 00:23:33.262 } 00:23:33.262 ] 00:23:33.262 }, 00:23:33.262 { 00:23:33.262 "subsystem": "bdev", 00:23:33.262 "config": [ 00:23:33.262 { 00:23:33.262 "method": "bdev_set_options", 00:23:33.262 "params": { 00:23:33.262 "bdev_io_pool_size": 65535, 00:23:33.262 "bdev_io_cache_size": 256, 00:23:33.262 "bdev_auto_examine": true, 00:23:33.262 "iobuf_small_cache_size": 128, 00:23:33.262 "iobuf_large_cache_size": 16 00:23:33.262 } 00:23:33.262 }, 00:23:33.262 { 00:23:33.262 "method": "bdev_raid_set_options", 00:23:33.262 "params": { 00:23:33.262 "process_window_size_kb": 1024 00:23:33.262 } 00:23:33.262 }, 00:23:33.262 { 00:23:33.262 "method": "bdev_iscsi_set_options", 00:23:33.262 "params": { 00:23:33.262 "timeout_sec": 30 00:23:33.262 } 00:23:33.262 }, 00:23:33.262 { 00:23:33.262 "method": "bdev_nvme_set_options", 00:23:33.262 "params": { 00:23:33.262 "action_on_timeout": "none", 00:23:33.262 "timeout_us": 0, 00:23:33.262 "timeout_admin_us": 0, 00:23:33.262 "keep_alive_timeout_ms": 
10000, 00:23:33.262 "arbitration_burst": 0, 00:23:33.262 "low_priority_weight": 0, 00:23:33.262 "medium_priority_weight": 0, 00:23:33.262 "high_priority_weight": 0, 00:23:33.262 "nvme_adminq_poll_period_us": 10000, 00:23:33.262 "nvme_ioq_poll_period_us": 0, 00:23:33.262 "io_queue_requests": 512, 00:23:33.262 "delay_cmd_submit": true, 00:23:33.262 "transport_retry_count": 4, 00:23:33.262 "bdev_retry_count": 3, 00:23:33.262 "transport_ack_timeout": 0, 00:23:33.262 "ctrlr_loss_timeout_sec": 0, 00:23:33.262 "reconnect_delay_sec": 0, 00:23:33.263 "fast_io_fail_timeout_sec": 0, 00:23:33.263 "disable_auto_failback": false, 00:23:33.263 "generate_uuids": false, 00:23:33.263 "transport_tos": 0, 00:23:33.263 "nvme_error_stat": false, 00:23:33.263 "rdma_srq_size": 0, 00:23:33.263 "io_path_stat": false, 00:23:33.263 "allow_accel_sequence": false, 00:23:33.263 "rdma_max_cq_size": 0, 00:23:33.263 "rdma_cm_event_timeout_ms": 0, 00:23:33.263 "dhchap_digests": [ 00:23:33.263 "sha256", 00:23:33.263 "sha384", 00:23:33.263 "sha512" 00:23:33.263 ], 00:23:33.263 "dhchap_dhgroups": [ 00:23:33.263 "null", 00:23:33.263 "ffdhe2048", 00:23:33.263 "ffdhe3072", 00:23:33.263 "ffdhe4096", 00:23:33.263 "ffdhe6144", 00:23:33.263 "ffdhe8192" 00:23:33.263 ] 00:23:33.263 } 00:23:33.263 }, 00:23:33.263 { 00:23:33.263 "method": "bdev_nvme_attach_controller", 00:23:33.263 "params": { 00:23:33.263 "name": "nvme0", 00:23:33.263 "trtype": "TCP", 00:23:33.263 "adrfam": "IPv4", 00:23:33.263 "traddr": "10.0.0.2", 00:23:33.263 "trsvcid": "4420", 00:23:33.263 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.263 "prchk_reftag": false, 00:23:33.263 "prchk_guard": false, 00:23:33.263 "ctrlr_loss_timeout_sec": 0, 00:23:33.263 "reconnect_delay_sec": 0, 00:23:33.263 "fast_io_fail_timeout_sec": 0, 00:23:33.263 "psk": "key0", 00:23:33.263 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:33.263 "hdgst": false, 00:23:33.263 "ddgst": false 00:23:33.263 } 00:23:33.263 }, 00:23:33.263 { 00:23:33.263 "method": "bdev_nvme_set_hotplug", 00:23:33.263 "params": { 00:23:33.263 "period_us": 100000, 00:23:33.263 "enable": false 00:23:33.263 } 00:23:33.263 }, 00:23:33.263 { 00:23:33.263 "method": "bdev_enable_histogram", 00:23:33.263 "params": { 00:23:33.263 "name": "nvme0n1", 00:23:33.263 "enable": true 00:23:33.263 } 00:23:33.263 }, 00:23:33.263 { 00:23:33.263 "method": "bdev_wait_for_examine" 00:23:33.263 } 00:23:33.263 ] 00:23:33.263 }, 00:23:33.263 { 00:23:33.263 "subsystem": "nbd", 00:23:33.263 "config": [] 00:23:33.263 } 00:23:33.263 ] 00:23:33.263 }' 00:23:33.263 04:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:33.263 04:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.263 [2024-07-14 04:39:53.352598] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:23:33.263 [2024-07-14 04:39:53.352673] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2836206 ] 00:23:33.263 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.263 [2024-07-14 04:39:53.410237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.521 [2024-07-14 04:39:53.496965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.521 [2024-07-14 04:39:53.679628] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:34.455 04:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:34.455 04:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:34.455 04:39:54 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:34.455 04:39:54 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:34.455 04:39:54 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.455 04:39:54 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:34.713 Running I/O for 1 seconds... 00:23:35.646 00:23:35.646 Latency(us) 00:23:35.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.646 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:35.646 Verification LBA range: start 0x0 length 0x2000 00:23:35.646 nvme0n1 : 1.08 1409.57 5.51 0.00 0.00 88151.64 9660.49 120392.06 00:23:35.646 =================================================================================================================== 00:23:35.646 Total : 1409.57 5.51 0.00 0.00 88151.64 9660.49 120392.06 00:23:35.646 0 00:23:35.646 04:39:55 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:35.646 04:39:55 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:35.646 04:39:55 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:35.646 04:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:23:35.646 04:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:23:35.646 04:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:35.646 04:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:35.646 04:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:35.646 04:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:35.646 04:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:35.646 04:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:35.646 nvmf_trace.0 00:23:35.646 04:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:23:35.646 04:39:55 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2836206 00:23:35.646 04:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2836206 ']' 00:23:35.646 04:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2836206 
00:23:35.646 04:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:35.646 04:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:35.646 04:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2836206 00:23:35.904 04:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:35.904 04:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:35.904 04:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2836206' 00:23:35.904 killing process with pid 2836206 00:23:35.904 04:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2836206 00:23:35.904 Received shutdown signal, test time was about 1.000000 seconds 00:23:35.904 00:23:35.904 Latency(us) 00:23:35.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.904 =================================================================================================================== 00:23:35.904 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:35.904 04:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2836206 00:23:35.904 04:39:56 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:35.904 04:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:35.904 04:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:35.904 04:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:35.904 04:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:35.904 04:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:35.904 04:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:36.161 rmmod nvme_tcp 00:23:36.161 rmmod nvme_fabrics 00:23:36.161 rmmod nvme_keyring 00:23:36.161 04:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:36.161 04:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:36.161 04:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:36.161 04:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2836055 ']' 00:23:36.161 04:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2836055 00:23:36.161 04:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2836055 ']' 00:23:36.161 04:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2836055 00:23:36.161 04:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:36.161 04:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:36.161 04:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2836055 00:23:36.161 04:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:36.161 04:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:36.161 04:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2836055' 00:23:36.161 killing process with pid 2836055 00:23:36.161 04:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2836055 00:23:36.161 04:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2836055 00:23:36.419 04:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:36.419 04:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:36.419 04:39:56 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:36.419 04:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.419 04:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:36.419 04:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.419 04:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.419 04:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.323 04:39:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:38.323 04:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.eB4v01BZ3u /tmp/tmp.e0NSleXl0I /tmp/tmp.N0E2ujdMon 00:23:38.323 00:23:38.323 real 1m18.784s 00:23:38.323 user 1m58.912s 00:23:38.323 sys 0m27.264s 00:23:38.323 04:39:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:38.323 04:39:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.323 ************************************ 00:23:38.323 END TEST nvmf_tls 00:23:38.323 ************************************ 00:23:38.323 04:39:58 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:38.323 04:39:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:38.323 04:39:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:38.323 04:39:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:38.323 ************************************ 00:23:38.323 START TEST nvmf_fips 00:23:38.323 ************************************ 00:23:38.323 04:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:38.581 * Looking for test storage... 
00:23:38.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.581 04:39:58 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.581 04:39:58 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:38.582 Error setting digest 00:23:38.582 00F2E345F77F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:38.582 00F2E345F77F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:38.582 04:39:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:40.485 
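Condensed, the fips.sh gate traced above makes three checks before any NVMe/TCP traffic is sent: OpenSSL must report version 3.0.0 or newer, a FIPS provider module must be installed and listed, and a non-approved digest (MD5) must be refused. A minimal stand-alone sketch of those checks follows; the grep pattern and the pipe into openssl md5 are illustrative, the commands themselves are the ones the script runs.

    openssl version | awk '{print $2}'        # compared against 3.0.0 (this host reports 3.0.9)
    openssl info -modulesdir                  # fips.sh then expects fips.so inside this directory
    OPENSSL_CONF=spdk_fips.conf openssl list -providers | grep name   # base and fips providers must both appear
    # with the FIPS provider active, MD5 is unavailable, so this has to fail:
    echo -n test | openssl md5 || echo 'FIPS restrictions are being enforced'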
04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:40.485 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:40.485 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:40.485 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:40.485 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:40.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:23:40.485 00:23:40.485 --- 10.0.0.2 ping statistics --- 00:23:40.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.485 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:40.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:40.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:23:40.485 00:23:40.485 --- 10.0.0.1 ping statistics --- 00:23:40.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.485 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:40.485 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:40.744 04:40:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:40.744 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:40.744 04:40:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:40.744 04:40:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:40.744 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2838446 00:23:40.744 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:40.744 04:40:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2838446 00:23:40.744 04:40:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 2838446 ']' 00:23:40.744 04:40:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.744 04:40:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:40.744 04:40:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.744 04:40:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:40.744 04:40:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:40.744 [2024-07-14 04:40:00.772110] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:40.744 [2024-07-14 04:40:00.772236] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.744 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.744 [2024-07-14 04:40:00.843340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.744 [2024-07-14 04:40:00.929877] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.744 [2024-07-14 04:40:00.929941] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
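Behind the nvmf_tcp_init noise above, the harness builds a self-contained loopback topology out of the two E810 ports it enumerated (cvl_0_0 and cvl_0_1): the target port is moved into its own network namespace so one host can act as both target (10.0.0.2) and initiator (10.0.0.1), and TCP port 4420 is opened for NVMe/TCP. A sketch of the equivalent manual setup, using exactly the interface names and addresses reported in the trace:

    ip netns add cvl_0_0_ns_spdk                   # namespace that will own the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # allow the NVMe/TCP listener port
    ping -c 1 10.0.0.2                             # initiator -> target reachability check

Every nvmf_tgt started later in this log is launched under ip netns exec cvl_0_0_ns_spdk so that it listens on the namespaced side of this topology.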
00:23:40.744 [2024-07-14 04:40:00.929971] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.744 [2024-07-14 04:40:00.929983] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.744 [2024-07-14 04:40:00.929993] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.744 [2024-07-14 04:40:00.930022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.003 04:40:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:41.003 04:40:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:41.003 04:40:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:41.003 04:40:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:41.003 04:40:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:41.003 04:40:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.003 04:40:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:41.003 04:40:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:41.003 04:40:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:41.003 04:40:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:41.003 04:40:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:41.003 04:40:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:41.003 04:40:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:41.003 04:40:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:41.261 [2024-07-14 04:40:01.345777] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.261 [2024-07-14 04:40:01.361767] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:41.261 [2024-07-14 04:40:01.362004] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.261 [2024-07-14 04:40:01.394255] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:41.261 malloc0 00:23:41.261 04:40:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:41.261 04:40:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2838592 00:23:41.261 04:40:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:41.261 04:40:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2838592 /var/tmp/bdevperf.sock 00:23:41.261 04:40:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 2838592 ']' 00:23:41.261 04:40:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.261 04:40:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- 
# local max_retries=100 00:23:41.261 04:40:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.261 04:40:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:41.261 04:40:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:41.519 [2024-07-14 04:40:01.485558] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:41.519 [2024-07-14 04:40:01.485635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838592 ] 00:23:41.519 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.519 [2024-07-14 04:40:01.542034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.519 [2024-07-14 04:40:01.625135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.777 04:40:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:41.777 04:40:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:41.777 04:40:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:42.035 [2024-07-14 04:40:01.981027] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:42.035 [2024-07-14 04:40:01.981158] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:42.035 TLSTESTn1 00:23:42.035 04:40:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:42.035 Running I/O for 10 seconds... 
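Stripped of the shell tracing, the TLS data-path portion of this FIPS test is short: the PSK interchange key is written to a 0600-mode file, setup_nvmf_tgt_conf registers that path for the allowed host and opens the listener on 10.0.0.2:4420, and bdevperf then attaches to the subsystem with --psk, so the connection must complete a TLS handshake before the verify workload can run. A condensed sketch using the values from the trace (run from the spdk tree; the relative key path is illustrative, the flags are the ones shown above):

    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
    chmod 0600 key.txt
    # initiator side: bdevperf waits for RPC (-z), then attaches with the PSK
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The ten-second result table that follows is produced by that perform_tests call.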
00:23:54.260 00:23:54.260 Latency(us) 00:23:54.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.260 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:54.260 Verification LBA range: start 0x0 length 0x2000 00:23:54.260 TLSTESTn1 : 10.07 1577.71 6.16 0.00 0.00 80861.56 6699.24 112624.83 00:23:54.260 =================================================================================================================== 00:23:54.260 Total : 1577.71 6.16 0.00 0.00 80861.56 6699.24 112624.83 00:23:54.260 0 00:23:54.260 04:40:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:54.260 04:40:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:54.260 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:23:54.260 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:23:54.260 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:54.260 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:54.260 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:54.260 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:54.260 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:54.260 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:54.260 nvmf_trace.0 00:23:54.260 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:23:54.260 04:40:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2838592 00:23:54.260 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 2838592 ']' 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 2838592 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2838592 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2838592' 00:23:54.261 killing process with pid 2838592 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 2838592 00:23:54.261 Received shutdown signal, test time was about 10.000000 seconds 00:23:54.261 00:23:54.261 Latency(us) 00:23:54.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.261 =================================================================================================================== 00:23:54.261 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:54.261 [2024-07-14 04:40:12.377155] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 2838592 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:54.261 rmmod nvme_tcp 00:23:54.261 rmmod nvme_fabrics 00:23:54.261 rmmod nvme_keyring 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2838446 ']' 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2838446 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 2838446 ']' 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 2838446 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2838446 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2838446' 00:23:54.261 killing process with pid 2838446 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 2838446 00:23:54.261 [2024-07-14 04:40:12.695795] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 2838446 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:54.261 04:40:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.826 04:40:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:54.826 04:40:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:54.826 00:23:54.826 real 0m16.491s 00:23:54.826 user 0m20.165s 00:23:54.826 sys 0m6.593s 00:23:54.826 04:40:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:54.826 04:40:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:54.826 ************************************ 00:23:54.826 END TEST nvmf_fips 
00:23:54.826 ************************************ 00:23:54.826 04:40:15 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:23:54.826 04:40:15 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:54.826 04:40:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:54.826 04:40:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:54.826 04:40:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:55.085 ************************************ 00:23:55.085 START TEST nvmf_fuzz 00:23:55.085 ************************************ 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:55.085 * Looking for test storage... 00:23:55.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:55.085 04:40:15 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:55.085 04:40:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:56.984 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:56.984 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.984 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:56.985 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:56.985 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.985 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:57.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:23:57.243 00:23:57.243 --- 10.0.0.2 ping statistics --- 00:23:57.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.243 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:57.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:23:57.243 00:23:57.243 --- 10.0.0.1 ping statistics --- 00:23:57.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.243 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2841837 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2841837 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 2841837 ']' 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
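For the fuzz stage a fresh nvmf_tgt is started inside the same namespace on core 0 (-m 0x1) and then configured entirely over RPC. The rpc_cmd calls traced below (rpc_cmd is the test wrapper around scripts/rpc.py) plus the two nvme_fuzz invocations reduce to roughly the following, with paths shortened to be relative to the spdk tree:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512          # 64 MB malloc namespace, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # pass 1: 30 seconds of randomized admin/IO commands (-t 30, seed -S 123456)
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -N -a \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    # pass 2: replay the canned transactions from example.json
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -a \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' \
        -j ./test/app/fuzz/nvme_fuzz/example.json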
00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:57.243 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:57.502 Malloc0 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:57.502 04:40:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:29.560 Fuzzing completed. 
Shutting down the fuzz application 00:24:29.560 00:24:29.560 Dumping successful admin opcodes: 00:24:29.560 8, 9, 10, 24, 00:24:29.560 Dumping successful io opcodes: 00:24:29.560 0, 9, 00:24:29.560 NS: 0x200003aeff00 I/O qp, Total commands completed: 462751, total successful commands: 2674, random_seed: 2062173184 00:24:29.560 NS: 0x200003aeff00 admin qp, Total commands completed: 57152, total successful commands: 455, random_seed: 1689347072 00:24:29.560 04:40:48 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:29.560 Fuzzing completed. Shutting down the fuzz application 00:24:29.560 00:24:29.560 Dumping successful admin opcodes: 00:24:29.560 24, 00:24:29.560 Dumping successful io opcodes: 00:24:29.560 00:24:29.560 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3463860073 00:24:29.560 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3463983556 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:29.560 rmmod nvme_tcp 00:24:29.560 rmmod nvme_fabrics 00:24:29.560 rmmod nvme_keyring 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 2841837 ']' 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 2841837 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 2841837 ']' 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 2841837 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2841837 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 
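The killprocess and teardown entries around this point close out target/fabrics_fuzz.sh. In summary, the pass being cleaned up provisioned a single TCP subsystem over RPC and ran nvme_fuzz against it twice: a 30-second randomized pass seeded with 123456, then a replay of the canned example.json requests; the opcode dumps and command counters above are the output of those two passes. A condensed sketch, with rpc.py standing in for the script's rpc_cmd wrapper and nvme_fuzz for the full binary path in the trace:

  # Provision transport, backing bdev, subsystem, namespace and listener
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create -b Malloc0 64 512
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

  # Pass 1: 30 s of randomized admin/I/O commands; pass 2: replay the JSON corpus
  nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
  nvme_fuzz -m 0x2 -F "$trid" -j test/app/fuzz/nvme_fuzz/example.json -a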
00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2841837' 00:24:29.560 killing process with pid 2841837 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 2841837 00:24:29.560 04:40:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 2841837 00:24:29.818 04:40:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:29.818 04:40:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:29.818 04:40:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:29.818 04:40:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:29.818 04:40:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:29.819 04:40:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.819 04:40:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:29.819 04:40:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.719 04:40:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:31.719 04:40:51 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:31.719 00:24:31.719 real 0m36.826s 00:24:31.719 user 0m50.967s 00:24:31.719 sys 0m15.018s 00:24:31.719 04:40:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:31.719 04:40:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:31.719 ************************************ 00:24:31.719 END TEST nvmf_fuzz 00:24:31.719 ************************************ 00:24:31.719 04:40:51 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:31.719 04:40:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:31.719 04:40:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:31.719 04:40:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:31.719 ************************************ 00:24:31.978 START TEST nvmf_multiconnection 00:24:31.978 ************************************ 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:31.978 * Looking for test storage... 
00:24:31.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.978 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.979 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.979 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:31.979 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:31.979 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:31.979 04:40:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:31.979 04:40:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:31.979 04:40:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:31.979 04:40:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:31.979 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:31.979 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.979 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:31.979 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:31.979 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:31.979 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.979 04:40:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:31.979 04:40:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.979 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:31.979 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:31.979 04:40:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:31.979 04:40:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.881 04:40:53 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:33.881 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:33.881 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:33.881 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:33.881 04:40:53 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:33.881 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:33.881 04:40:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:33.881 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:33.881 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:33.881 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:33.881 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:34.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:24:34.140 00:24:34.140 --- 10.0.0.2 ping statistics --- 00:24:34.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.140 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:34.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:24:34.140 00:24:34.140 --- 10.0.0.1 ping statistics --- 00:24:34.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.140 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=2847437 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 2847437 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 2847437 ']' 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
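The same namespace plumbing used for the fuzz test has just been rebuilt for the multiconnection run: the first e810 port (cvl_0_0) becomes the target interface inside cvl_0_0_ns_spdk, the second port (cvl_0_1) stays in the host namespace as the initiator, and each direction is verified with a single ping. Condensed from the commands in the trace above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP reach port 4420
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator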
00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:34.140 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.140 [2024-07-14 04:40:54.186515] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:34.140 [2024-07-14 04:40:54.186602] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.140 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.140 [2024-07-14 04:40:54.248207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:34.398 [2024-07-14 04:40:54.342596] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.398 [2024-07-14 04:40:54.342648] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.398 [2024-07-14 04:40:54.342664] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.398 [2024-07-14 04:40:54.342677] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.398 [2024-07-14 04:40:54.342689] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:34.398 [2024-07-14 04:40:54.342766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.398 [2024-07-14 04:40:54.342822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.398 [2024-07-14 04:40:54.342989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:34.398 [2024-07-14 04:40:54.342993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.398 [2024-07-14 04:40:54.494751] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.398 04:40:54 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.398 Malloc1 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.398 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.399 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.399 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:34.399 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.399 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.399 [2024-07-14 04:40:54.551509] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.399 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.399 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.399 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:34.399 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.399 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.399 Malloc2 00:24:34.399 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.399 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:34.399 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.399 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.399 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.399 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:34.399 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.399 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.657 04:40:54 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.657 Malloc3 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.657 Malloc4 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.657 Malloc5 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.657 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.658 Malloc6 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.658 04:40:54 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.658 Malloc7 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.658 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.917 Malloc8 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.917 Malloc9 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.917 Malloc10 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.917 04:40:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.917 04:40:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.917 04:40:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.917 04:40:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:34.917 04:40:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.917 04:40:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.917 Malloc11 00:24:34.917 04:40:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.917 04:40:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:34.917 04:40:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.917 04:40:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.917 04:40:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.917 04:40:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:34.917 04:40:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.917 04:40:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.917 04:40:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.917 04:40:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
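The provisioning loop above is completing the last of its 11 malloc-backed subsystems (Malloc1–Malloc11 behind cnode1–cnode11), each with serial SPDK<n> and a TCP listener on 10.0.0.2:4420; the entries that follow connect to each subsystem from the initiator side and wait for the matching serial to appear. A condensed sketch of both loops; rpc.py stands in for the script's rpc_cmd wrapper, the until loop is a simplified stand-in for the waitforserial helper (which retries up to 15 times), and NVME_HOSTNQN/NVME_HOSTID are the values generated by nvmf/common.sh earlier in the trace:

  for i in $(seq 1 11); do
      rpc.py bdev_malloc_create 64 512 -b Malloc$i
      rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done

  for i in $(seq 1 11); do
      nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
          -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
      # Wait until a block device carrying serial SPDK$i is visible
      until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
          sleep 2
      done
  done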
00:24:34.917 04:40:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.917 04:40:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.917 04:40:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.917 04:40:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:34.917 04:40:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.917 04:40:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:35.507 04:40:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:35.507 04:40:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:35.507 04:40:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:35.507 04:40:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:35.507 04:40:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:38.070 04:40:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:38.070 04:40:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:38.070 04:40:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:24:38.070 04:40:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:38.070 04:40:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:38.070 04:40:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:38.070 04:40:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.070 04:40:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:38.329 04:40:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:38.329 04:40:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:38.329 04:40:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:38.329 04:40:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:38.329 04:40:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:40.867 04:41:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:40.867 04:41:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:40.867 04:41:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:24:40.867 04:41:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:40.867 04:41:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:40.867 
04:41:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:40.867 04:41:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.867 04:41:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:41.126 04:41:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:41.126 04:41:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:41.126 04:41:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:41.126 04:41:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:41.126 04:41:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:43.031 04:41:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:43.031 04:41:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:43.031 04:41:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:24:43.031 04:41:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:43.031 04:41:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:43.031 04:41:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:43.031 04:41:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.031 04:41:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:43.978 04:41:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:43.979 04:41:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:43.979 04:41:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:43.979 04:41:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:43.979 04:41:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:45.885 04:41:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:45.885 04:41:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:45.885 04:41:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:24:45.885 04:41:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:45.885 04:41:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:45.885 04:41:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:45.885 04:41:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:45.885 04:41:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:46.452 04:41:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:46.452 04:41:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:46.452 04:41:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:46.452 04:41:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:46.452 04:41:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:48.985 04:41:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:48.985 04:41:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:48.985 04:41:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:24:48.985 04:41:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:48.985 04:41:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:48.985 04:41:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:48.985 04:41:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.985 04:41:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:49.244 04:41:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:49.244 04:41:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:49.244 04:41:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:49.244 04:41:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:49.244 04:41:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:51.784 04:41:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:51.784 04:41:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:51.784 04:41:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:24:51.784 04:41:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:51.784 04:41:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:51.784 04:41:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:51.784 04:41:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.784 04:41:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:52.351 04:41:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:52.351 04:41:12 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:52.351 04:41:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:52.351 04:41:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:52.351 04:41:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:54.256 04:41:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:54.256 04:41:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:54.256 04:41:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:24:54.256 04:41:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:54.256 04:41:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:54.256 04:41:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:54.256 04:41:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.256 04:41:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:54.825 04:41:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:54.825 04:41:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:54.825 04:41:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:54.825 04:41:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:54.825 04:41:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:57.355 04:41:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:57.355 04:41:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:57.355 04:41:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:24:57.355 04:41:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:57.355 04:41:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:57.355 04:41:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:57.355 04:41:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.355 04:41:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:57.921 04:41:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:57.921 04:41:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:57.921 04:41:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:57.921 04:41:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 
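The connect sequence being traced here is driven by target/multiconnection.sh (script lines 28-30, per the xtrace prefixes), with waitforserial supplied by common/autotest_common.sh. A rough reconstruction from the trace follows; the helper body is approximated and the variable names are assumptions, so treat it as a sketch rather than the script verbatim:

    NVMF_SUBSYS=11
    HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55

    waitforserial() {
        # Approximation of autotest_common.sh's waitforserial as seen in the
        # xtrace: poll lsblk until a namespace with the given serial appears.
        local serial=$1
        local i=0
        local nvme_device_counter=1 nvme_devices=0
        sleep 2
        while (( i++ <= 15 )); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2
        done
        return 1
    }

    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:${HOSTID}" \
            --hostid="${HOSTID}" \
            -t tcp -n "nqn.2016-06.io.spdk:cnode${i}" -a 10.0.0.2 -s 4420
        waitforserial "SPDK${i}"
    done

Each iteration attaches one subsystem (cnode1 through cnode11) at 10.0.0.2:4420 over TCP and then polls lsblk until a block device whose serial matches SPDK<i> shows up, which is exactly the lsblk/grep -c pattern repeated in the trace above and below.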
00:24:57.921 04:41:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:59.832 04:41:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:59.832 04:41:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:59.832 04:41:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:24:59.832 04:41:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:59.832 04:41:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:59.832 04:41:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:59.832 04:41:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.832 04:41:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:00.774 04:41:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:00.774 04:41:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:00.774 04:41:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:00.774 04:41:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:00.774 04:41:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:02.673 04:41:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:02.673 04:41:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:02.673 04:41:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:25:02.673 04:41:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:02.674 04:41:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:02.674 04:41:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:02.674 04:41:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:02.674 04:41:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:03.608 04:41:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:03.608 04:41:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:03.608 04:41:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:03.608 04:41:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:03.608 04:41:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:05.568 04:41:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:05.568 04:41:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o 
NAME,SERIAL 00:25:05.568 04:41:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:25:05.568 04:41:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:05.568 04:41:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:05.568 04:41:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:05.568 04:41:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:05.568 [global] 00:25:05.568 thread=1 00:25:05.568 invalidate=1 00:25:05.568 rw=read 00:25:05.568 time_based=1 00:25:05.568 runtime=10 00:25:05.568 ioengine=libaio 00:25:05.568 direct=1 00:25:05.568 bs=262144 00:25:05.568 iodepth=64 00:25:05.568 norandommap=1 00:25:05.568 numjobs=1 00:25:05.568 00:25:05.568 [job0] 00:25:05.568 filename=/dev/nvme0n1 00:25:05.568 [job1] 00:25:05.568 filename=/dev/nvme10n1 00:25:05.568 [job2] 00:25:05.568 filename=/dev/nvme1n1 00:25:05.568 [job3] 00:25:05.568 filename=/dev/nvme2n1 00:25:05.568 [job4] 00:25:05.568 filename=/dev/nvme3n1 00:25:05.568 [job5] 00:25:05.568 filename=/dev/nvme4n1 00:25:05.568 [job6] 00:25:05.568 filename=/dev/nvme5n1 00:25:05.568 [job7] 00:25:05.568 filename=/dev/nvme6n1 00:25:05.568 [job8] 00:25:05.568 filename=/dev/nvme7n1 00:25:05.568 [job9] 00:25:05.568 filename=/dev/nvme8n1 00:25:05.568 [job10] 00:25:05.568 filename=/dev/nvme9n1 00:25:05.832 Could not set queue depth (nvme0n1) 00:25:05.832 Could not set queue depth (nvme10n1) 00:25:05.832 Could not set queue depth (nvme1n1) 00:25:05.832 Could not set queue depth (nvme2n1) 00:25:05.832 Could not set queue depth (nvme3n1) 00:25:05.832 Could not set queue depth (nvme4n1) 00:25:05.832 Could not set queue depth (nvme5n1) 00:25:05.832 Could not set queue depth (nvme6n1) 00:25:05.832 Could not set queue depth (nvme7n1) 00:25:05.832 Could not set queue depth (nvme8n1) 00:25:05.832 Could not set queue depth (nvme9n1) 00:25:05.832 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.832 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.832 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.832 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.832 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.832 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.832 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.832 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.832 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.832 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.832 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.832 fio-3.35 00:25:05.832 Starting 11 threads 00:25:18.039 00:25:18.039 job0: 
(groupid=0, jobs=1): err= 0: pid=2851684: Sun Jul 14 04:41:36 2024 00:25:18.039 read: IOPS=537, BW=134MiB/s (141MB/s)(1358MiB/10117msec) 00:25:18.039 slat (usec): min=9, max=110683, avg=1223.93, stdev=5260.86 00:25:18.039 clat (msec): min=5, max=255, avg=117.85, stdev=50.89 00:25:18.039 lat (msec): min=5, max=281, avg=119.07, stdev=51.60 00:25:18.039 clat percentiles (msec): 00:25:18.039 | 1.00th=[ 15], 5.00th=[ 43], 10.00th=[ 55], 20.00th=[ 70], 00:25:18.039 | 30.00th=[ 84], 40.00th=[ 102], 50.00th=[ 116], 60.00th=[ 132], 00:25:18.039 | 70.00th=[ 148], 80.00th=[ 165], 90.00th=[ 184], 95.00th=[ 199], 00:25:18.039 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 255], 99.95th=[ 255], 00:25:18.039 | 99.99th=[ 255] 00:25:18.039 bw ( KiB/s): min=88576, max=233472, per=7.93%, avg=137421.80, stdev=40809.14, samples=20 00:25:18.039 iops : min= 346, max= 912, avg=536.70, stdev=159.37, samples=20 00:25:18.039 lat (msec) : 10=0.29%, 20=1.58%, 50=5.80%, 100=31.44%, 250=60.52% 00:25:18.039 lat (msec) : 500=0.37% 00:25:18.039 cpu : usr=0.24%, sys=1.68%, ctx=1457, majf=0, minf=4097 00:25:18.039 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:18.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.039 issued rwts: total=5433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.039 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.039 job1: (groupid=0, jobs=1): err= 0: pid=2851685: Sun Jul 14 04:41:36 2024 00:25:18.039 read: IOPS=782, BW=196MiB/s (205MB/s)(1962MiB/10024msec) 00:25:18.039 slat (usec): min=14, max=108452, avg=1104.25, stdev=3917.76 00:25:18.040 clat (msec): min=3, max=264, avg=80.59, stdev=43.15 00:25:18.040 lat (msec): min=3, max=264, avg=81.69, stdev=43.64 00:25:18.040 clat percentiles (msec): 00:25:18.040 | 1.00th=[ 11], 5.00th=[ 27], 10.00th=[ 37], 20.00th=[ 42], 00:25:18.040 | 30.00th=[ 52], 40.00th=[ 65], 50.00th=[ 73], 60.00th=[ 83], 00:25:18.040 | 70.00th=[ 96], 80.00th=[ 112], 90.00th=[ 144], 95.00th=[ 163], 00:25:18.040 | 99.00th=[ 205], 99.50th=[ 245], 99.90th=[ 259], 99.95th=[ 259], 00:25:18.040 | 99.99th=[ 266] 00:25:18.040 bw ( KiB/s): min=108544, max=343214, per=11.50%, avg=199266.90, stdev=54185.04, samples=20 00:25:18.040 iops : min= 424, max= 1340, avg=778.30, stdev=211.59, samples=20 00:25:18.040 lat (msec) : 4=0.01%, 10=0.97%, 20=1.72%, 50=26.77%, 100=43.47% 00:25:18.040 lat (msec) : 250=26.66%, 500=0.40% 00:25:18.040 cpu : usr=0.56%, sys=2.72%, ctx=1709, majf=0, minf=4097 00:25:18.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:18.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.040 issued rwts: total=7847,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.040 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.040 job2: (groupid=0, jobs=1): err= 0: pid=2851686: Sun Jul 14 04:41:36 2024 00:25:18.040 read: IOPS=533, BW=133MiB/s (140MB/s)(1348MiB/10112msec) 00:25:18.040 slat (usec): min=10, max=132160, avg=1561.56, stdev=6149.92 00:25:18.040 clat (msec): min=4, max=309, avg=118.35, stdev=64.15 00:25:18.040 lat (msec): min=4, max=316, avg=119.91, stdev=65.29 00:25:18.040 clat percentiles (msec): 00:25:18.040 | 1.00th=[ 11], 5.00th=[ 18], 10.00th=[ 32], 20.00th=[ 48], 00:25:18.040 | 30.00th=[ 68], 40.00th=[ 107], 50.00th=[ 132], 60.00th=[ 142], 00:25:18.040 
| 70.00th=[ 159], 80.00th=[ 174], 90.00th=[ 194], 95.00th=[ 213], 00:25:18.040 | 99.00th=[ 275], 99.50th=[ 292], 99.90th=[ 300], 99.95th=[ 309], 00:25:18.040 | 99.99th=[ 309] 00:25:18.040 bw ( KiB/s): min=69120, max=297900, per=7.87%, avg=136440.90, stdev=57751.97, samples=20 00:25:18.040 iops : min= 270, max= 1163, avg=532.90, stdev=225.43, samples=20 00:25:18.040 lat (msec) : 10=0.69%, 20=5.47%, 50=15.28%, 100=17.23%, 250=59.43% 00:25:18.040 lat (msec) : 500=1.91% 00:25:18.040 cpu : usr=0.26%, sys=1.91%, ctx=1345, majf=0, minf=4097 00:25:18.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:18.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.040 issued rwts: total=5393,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.040 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.040 job3: (groupid=0, jobs=1): err= 0: pid=2851687: Sun Jul 14 04:41:36 2024 00:25:18.040 read: IOPS=535, BW=134MiB/s (140MB/s)(1355MiB/10121msec) 00:25:18.040 slat (usec): min=9, max=87504, avg=1335.08, stdev=5634.08 00:25:18.040 clat (usec): min=1113, max=315349, avg=118031.40, stdev=61739.50 00:25:18.040 lat (usec): min=1145, max=315366, avg=119366.48, stdev=62687.46 00:25:18.040 clat percentiles (msec): 00:25:18.040 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 35], 20.00th=[ 63], 00:25:18.040 | 30.00th=[ 80], 40.00th=[ 93], 50.00th=[ 112], 60.00th=[ 142], 00:25:18.040 | 70.00th=[ 159], 80.00th=[ 174], 90.00th=[ 194], 95.00th=[ 215], 00:25:18.040 | 99.00th=[ 268], 99.50th=[ 288], 99.90th=[ 317], 99.95th=[ 317], 00:25:18.040 | 99.99th=[ 317] 00:25:18.040 bw ( KiB/s): min=72192, max=267264, per=7.91%, avg=137120.15, stdev=53093.62, samples=20 00:25:18.040 iops : min= 282, max= 1044, avg=535.60, stdev=207.37, samples=20 00:25:18.040 lat (msec) : 2=0.02%, 4=0.22%, 10=2.07%, 20=2.99%, 50=8.85% 00:25:18.040 lat (msec) : 100=29.79%, 250=54.36%, 500=1.70% 00:25:18.040 cpu : usr=0.37%, sys=1.62%, ctx=1431, majf=0, minf=4097 00:25:18.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:18.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.040 issued rwts: total=5421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.040 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.040 job4: (groupid=0, jobs=1): err= 0: pid=2851688: Sun Jul 14 04:41:36 2024 00:25:18.040 read: IOPS=487, BW=122MiB/s (128MB/s)(1234MiB/10119msec) 00:25:18.040 slat (usec): min=9, max=91992, avg=1045.57, stdev=4791.55 00:25:18.040 clat (msec): min=2, max=316, avg=130.01, stdev=53.73 00:25:18.040 lat (msec): min=2, max=316, avg=131.06, stdev=54.20 00:25:18.040 clat percentiles (msec): 00:25:18.040 | 1.00th=[ 11], 5.00th=[ 38], 10.00th=[ 63], 20.00th=[ 86], 00:25:18.040 | 30.00th=[ 100], 40.00th=[ 115], 50.00th=[ 132], 60.00th=[ 146], 00:25:18.040 | 70.00th=[ 157], 80.00th=[ 174], 90.00th=[ 199], 95.00th=[ 222], 00:25:18.040 | 99.00th=[ 264], 99.50th=[ 275], 99.90th=[ 288], 99.95th=[ 292], 00:25:18.040 | 99.99th=[ 317] 00:25:18.040 bw ( KiB/s): min=72192, max=204902, per=7.20%, avg=124744.00, stdev=32116.94, samples=20 00:25:18.040 iops : min= 282, max= 800, avg=487.25, stdev=125.41, samples=20 00:25:18.040 lat (msec) : 4=0.06%, 10=0.89%, 20=1.66%, 50=3.52%, 100=24.33% 00:25:18.040 lat (msec) : 250=67.35%, 500=2.19% 00:25:18.040 cpu : usr=0.24%, 
sys=1.39%, ctx=1538, majf=0, minf=4097 00:25:18.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:18.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.040 issued rwts: total=4937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.040 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.040 job5: (groupid=0, jobs=1): err= 0: pid=2851689: Sun Jul 14 04:41:36 2024 00:25:18.040 read: IOPS=733, BW=183MiB/s (192MB/s)(1845MiB/10065msec) 00:25:18.040 slat (usec): min=9, max=141717, avg=883.53, stdev=4590.97 00:25:18.040 clat (msec): min=2, max=339, avg=86.36, stdev=58.70 00:25:18.040 lat (msec): min=2, max=339, avg=87.24, stdev=59.45 00:25:18.040 clat percentiles (msec): 00:25:18.040 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 20], 20.00th=[ 33], 00:25:18.040 | 30.00th=[ 40], 40.00th=[ 58], 50.00th=[ 75], 60.00th=[ 96], 00:25:18.040 | 70.00th=[ 118], 80.00th=[ 142], 90.00th=[ 169], 95.00th=[ 197], 00:25:18.040 | 99.00th=[ 230], 99.50th=[ 239], 99.90th=[ 257], 99.95th=[ 259], 00:25:18.040 | 99.99th=[ 338] 00:25:18.040 bw ( KiB/s): min=86016, max=337920, per=10.81%, avg=187278.60, stdev=81237.15, samples=20 00:25:18.040 iops : min= 336, max= 1320, avg=731.55, stdev=317.34, samples=20 00:25:18.040 lat (msec) : 4=0.34%, 10=4.16%, 20=5.66%, 50=26.97%, 100=24.89% 00:25:18.040 lat (msec) : 250=37.74%, 500=0.23% 00:25:18.040 cpu : usr=0.40%, sys=2.34%, ctx=1901, majf=0, minf=4097 00:25:18.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:18.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.040 issued rwts: total=7379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.040 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.040 job6: (groupid=0, jobs=1): err= 0: pid=2851690: Sun Jul 14 04:41:36 2024 00:25:18.040 read: IOPS=857, BW=214MiB/s (225MB/s)(2157MiB/10065msec) 00:25:18.040 slat (usec): min=10, max=59571, avg=959.47, stdev=3000.68 00:25:18.040 clat (msec): min=3, max=309, avg=73.66, stdev=37.83 00:25:18.040 lat (msec): min=3, max=350, avg=74.62, stdev=38.19 00:25:18.040 clat percentiles (msec): 00:25:18.040 | 1.00th=[ 10], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 44], 00:25:18.040 | 30.00th=[ 51], 40.00th=[ 58], 50.00th=[ 66], 60.00th=[ 74], 00:25:18.040 | 70.00th=[ 85], 80.00th=[ 103], 90.00th=[ 121], 95.00th=[ 136], 00:25:18.040 | 99.00th=[ 215], 99.50th=[ 279], 99.90th=[ 300], 99.95th=[ 305], 00:25:18.040 | 99.99th=[ 309] 00:25:18.040 bw ( KiB/s): min=127488, max=359424, per=12.65%, avg=219168.40, stdev=72368.39, samples=20 00:25:18.040 iops : min= 498, max= 1404, avg=856.05, stdev=282.68, samples=20 00:25:18.041 lat (msec) : 4=0.07%, 10=1.07%, 20=1.31%, 50=27.10%, 100=49.70% 00:25:18.041 lat (msec) : 250=20.11%, 500=0.64% 00:25:18.041 cpu : usr=0.53%, sys=2.87%, ctx=1917, majf=0, minf=3721 00:25:18.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:18.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.041 issued rwts: total=8626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.041 job7: (groupid=0, jobs=1): err= 0: pid=2851691: Sun Jul 14 04:41:36 2024 
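The per-job statistics being reported in this run come from the job file that scripts/fio-wrapper generated and echoed above (invoked with -p nvmf -i 262144 -d 64 -t read -r 10). Assuming the wrapper does nothing beyond expanding those flags into the [global] options shown, a standalone equivalent would look like the following (the file name is hypothetical; the wrapper writes its own temporary job file):

    cat > multiconnection-read.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=read
    time_based=1
    runtime=10
    ioengine=libaio
    direct=1
    bs=262144
    iodepth=64
    norandommap=1
    numjobs=1

    [job0]
    filename=/dev/nvme0n1

    [job1]
    filename=/dev/nvme10n1

    # ...one [jobN] stanza per connected namespace, 11 in total,
    # matching the filename list printed above.
    EOF
    fio multiconnection-read.fio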
00:25:18.041 read: IOPS=588, BW=147MiB/s (154MB/s)(1490MiB/10121msec) 00:25:18.041 slat (usec): min=9, max=144555, avg=813.74, stdev=4981.49 00:25:18.041 clat (msec): min=2, max=310, avg=107.80, stdev=60.30 00:25:18.041 lat (msec): min=2, max=400, avg=108.61, stdev=61.00 00:25:18.041 clat percentiles (msec): 00:25:18.041 | 1.00th=[ 9], 5.00th=[ 17], 10.00th=[ 30], 20.00th=[ 54], 00:25:18.041 | 30.00th=[ 72], 40.00th=[ 87], 50.00th=[ 102], 60.00th=[ 116], 00:25:18.041 | 70.00th=[ 142], 80.00th=[ 163], 90.00th=[ 186], 95.00th=[ 213], 00:25:18.041 | 99.00th=[ 257], 99.50th=[ 271], 99.90th=[ 300], 99.95th=[ 305], 00:25:18.041 | 99.99th=[ 313] 00:25:18.041 bw ( KiB/s): min=86016, max=226816, per=8.71%, avg=150903.75, stdev=45036.45, samples=20 00:25:18.041 iops : min= 336, max= 886, avg=589.40, stdev=175.95, samples=20 00:25:18.041 lat (msec) : 4=0.17%, 10=1.46%, 20=5.32%, 50=12.32%, 100=30.07% 00:25:18.041 lat (msec) : 250=49.00%, 500=1.66% 00:25:18.041 cpu : usr=0.28%, sys=1.58%, ctx=1813, majf=0, minf=4097 00:25:18.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:18.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.041 issued rwts: total=5959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.041 job8: (groupid=0, jobs=1): err= 0: pid=2851692: Sun Jul 14 04:41:36 2024 00:25:18.041 read: IOPS=713, BW=178MiB/s (187MB/s)(1787MiB/10019msec) 00:25:18.041 slat (usec): min=12, max=159209, avg=1198.86, stdev=4709.84 00:25:18.041 clat (msec): min=2, max=316, avg=88.44, stdev=53.32 00:25:18.041 lat (msec): min=2, max=321, avg=89.64, stdev=53.91 00:25:18.041 clat percentiles (msec): 00:25:18.041 | 1.00th=[ 12], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 41], 00:25:18.041 | 30.00th=[ 52], 40.00th=[ 63], 50.00th=[ 77], 60.00th=[ 91], 00:25:18.041 | 70.00th=[ 108], 80.00th=[ 136], 90.00th=[ 159], 95.00th=[ 180], 00:25:18.041 | 99.00th=[ 296], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 317], 00:25:18.041 | 99.99th=[ 317] 00:25:18.041 bw ( KiB/s): min=65667, max=366080, per=10.47%, avg=181391.40, stdev=86586.95, samples=20 00:25:18.041 iops : min= 256, max= 1430, avg=708.50, stdev=338.27, samples=20 00:25:18.041 lat (msec) : 4=0.04%, 10=0.77%, 20=0.92%, 50=27.60%, 100=36.42% 00:25:18.041 lat (msec) : 250=32.61%, 500=1.64% 00:25:18.041 cpu : usr=0.38%, sys=2.61%, ctx=1590, majf=0, minf=4097 00:25:18.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:18.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.041 issued rwts: total=7149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.041 job9: (groupid=0, jobs=1): err= 0: pid=2851693: Sun Jul 14 04:41:36 2024 00:25:18.041 read: IOPS=467, BW=117MiB/s (123MB/s)(1176MiB/10065msec) 00:25:18.041 slat (usec): min=12, max=72354, avg=1890.56, stdev=5501.19 00:25:18.041 clat (msec): min=22, max=326, avg=134.92, stdev=48.73 00:25:18.041 lat (msec): min=22, max=326, avg=136.81, stdev=49.57 00:25:18.041 clat percentiles (msec): 00:25:18.041 | 1.00th=[ 45], 5.00th=[ 66], 10.00th=[ 73], 20.00th=[ 89], 00:25:18.041 | 30.00th=[ 102], 40.00th=[ 120], 50.00th=[ 136], 60.00th=[ 148], 00:25:18.041 | 70.00th=[ 161], 80.00th=[ 178], 90.00th=[ 194], 
95.00th=[ 215], 00:25:18.041 | 99.00th=[ 279], 99.50th=[ 296], 99.90th=[ 309], 99.95th=[ 317], 00:25:18.041 | 99.99th=[ 326] 00:25:18.041 bw ( KiB/s): min=64000, max=205312, per=6.86%, avg=118790.40, stdev=33675.76, samples=20 00:25:18.041 iops : min= 250, max= 802, avg=464.00, stdev=131.57, samples=20 00:25:18.041 lat (msec) : 50=1.34%, 100=28.08%, 250=69.01%, 500=1.57% 00:25:18.041 cpu : usr=0.38%, sys=1.64%, ctx=1137, majf=0, minf=4097 00:25:18.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:18.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.041 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.041 job10: (groupid=0, jobs=1): err= 0: pid=2851694: Sun Jul 14 04:41:36 2024 00:25:18.041 read: IOPS=561, BW=140MiB/s (147MB/s)(1420MiB/10124msec) 00:25:18.041 slat (usec): min=14, max=121399, avg=1685.81, stdev=5209.08 00:25:18.041 clat (msec): min=3, max=353, avg=112.25, stdev=49.65 00:25:18.041 lat (msec): min=3, max=353, avg=113.93, stdev=50.41 00:25:18.041 clat percentiles (msec): 00:25:18.041 | 1.00th=[ 13], 5.00th=[ 49], 10.00th=[ 61], 20.00th=[ 70], 00:25:18.041 | 30.00th=[ 79], 40.00th=[ 88], 50.00th=[ 106], 60.00th=[ 123], 00:25:18.041 | 70.00th=[ 136], 80.00th=[ 150], 90.00th=[ 178], 95.00th=[ 211], 00:25:18.041 | 99.00th=[ 251], 99.50th=[ 262], 99.90th=[ 275], 99.95th=[ 338], 00:25:18.041 | 99.99th=[ 355] 00:25:18.041 bw ( KiB/s): min=67584, max=218112, per=8.30%, avg=143782.80, stdev=49472.83, samples=20 00:25:18.041 iops : min= 264, max= 852, avg=561.60, stdev=193.21, samples=20 00:25:18.041 lat (msec) : 4=0.05%, 10=0.74%, 20=0.92%, 50=3.56%, 100=42.60% 00:25:18.041 lat (msec) : 250=51.12%, 500=1.02% 00:25:18.041 cpu : usr=0.27%, sys=2.13%, ctx=1272, majf=0, minf=4097 00:25:18.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:18.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:18.041 issued rwts: total=5681,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:18.041 00:25:18.041 Run status group 0 (all jobs): 00:25:18.041 READ: bw=1692MiB/s (1774MB/s), 117MiB/s-214MiB/s (123MB/s-225MB/s), io=16.7GiB (18.0GB), run=10019-10124msec 00:25:18.041 00:25:18.041 Disk stats (read/write): 00:25:18.041 nvme0n1: ios=10704/0, merge=0/0, ticks=1236067/0, in_queue=1236067, util=97.25% 00:25:18.041 nvme10n1: ios=15417/0, merge=0/0, ticks=1234395/0, in_queue=1234395, util=97.45% 00:25:18.041 nvme1n1: ios=10614/0, merge=0/0, ticks=1225655/0, in_queue=1225655, util=97.71% 00:25:18.041 nvme2n1: ios=10684/0, merge=0/0, ticks=1233297/0, in_queue=1233297, util=97.83% 00:25:18.041 nvme3n1: ios=9734/0, merge=0/0, ticks=1241977/0, in_queue=1241977, util=97.92% 00:25:18.041 nvme4n1: ios=14526/0, merge=0/0, ticks=1237657/0, in_queue=1237657, util=98.26% 00:25:18.041 nvme5n1: ios=17023/0, merge=0/0, ticks=1236472/0, in_queue=1236472, util=98.43% 00:25:18.041 nvme6n1: ios=11740/0, merge=0/0, ticks=1237310/0, in_queue=1237310, util=98.52% 00:25:18.041 nvme7n1: ios=14021/0, merge=0/0, ticks=1237320/0, in_queue=1237320, util=98.90% 00:25:18.042 nvme8n1: ios=9207/0, merge=0/0, ticks=1225989/0, in_queue=1225989, util=99.10% 00:25:18.042 nvme9n1: ios=11185/0, 
merge=0/0, ticks=1225383/0, in_queue=1225383, util=99.22% 00:25:18.042 04:41:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:18.042 [global] 00:25:18.042 thread=1 00:25:18.042 invalidate=1 00:25:18.042 rw=randwrite 00:25:18.042 time_based=1 00:25:18.042 runtime=10 00:25:18.042 ioengine=libaio 00:25:18.042 direct=1 00:25:18.042 bs=262144 00:25:18.042 iodepth=64 00:25:18.042 norandommap=1 00:25:18.042 numjobs=1 00:25:18.042 00:25:18.042 [job0] 00:25:18.042 filename=/dev/nvme0n1 00:25:18.042 [job1] 00:25:18.042 filename=/dev/nvme10n1 00:25:18.042 [job2] 00:25:18.042 filename=/dev/nvme1n1 00:25:18.042 [job3] 00:25:18.042 filename=/dev/nvme2n1 00:25:18.042 [job4] 00:25:18.042 filename=/dev/nvme3n1 00:25:18.042 [job5] 00:25:18.042 filename=/dev/nvme4n1 00:25:18.042 [job6] 00:25:18.042 filename=/dev/nvme5n1 00:25:18.042 [job7] 00:25:18.042 filename=/dev/nvme6n1 00:25:18.042 [job8] 00:25:18.042 filename=/dev/nvme7n1 00:25:18.042 [job9] 00:25:18.042 filename=/dev/nvme8n1 00:25:18.042 [job10] 00:25:18.042 filename=/dev/nvme9n1 00:25:18.042 Could not set queue depth (nvme0n1) 00:25:18.042 Could not set queue depth (nvme10n1) 00:25:18.042 Could not set queue depth (nvme1n1) 00:25:18.042 Could not set queue depth (nvme2n1) 00:25:18.042 Could not set queue depth (nvme3n1) 00:25:18.042 Could not set queue depth (nvme4n1) 00:25:18.042 Could not set queue depth (nvme5n1) 00:25:18.042 Could not set queue depth (nvme6n1) 00:25:18.042 Could not set queue depth (nvme7n1) 00:25:18.042 Could not set queue depth (nvme8n1) 00:25:18.042 Could not set queue depth (nvme9n1) 00:25:18.042 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.042 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.042 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.042 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.042 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.042 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.042 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.042 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.042 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.042 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.042 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.042 fio-3.35 00:25:18.042 Starting 11 threads 00:25:28.019 00:25:28.019 job0: (groupid=0, jobs=1): err= 0: pid=2852862: Sun Jul 14 04:41:47 2024 00:25:28.019 write: IOPS=439, BW=110MiB/s (115MB/s)(1111MiB/10114msec); 0 zone resets 00:25:28.019 slat (usec): min=19, max=867915, avg=1574.56, stdev=13784.70 00:25:28.019 clat (msec): min=2, max=1148, avg=143.95, stdev=122.22 00:25:28.019 lat (msec): min=5, max=1148, 
avg=145.53, stdev=123.12 00:25:28.019 clat percentiles (msec): 00:25:28.019 | 1.00th=[ 14], 5.00th=[ 23], 10.00th=[ 39], 20.00th=[ 91], 00:25:28.019 | 30.00th=[ 113], 40.00th=[ 133], 50.00th=[ 146], 60.00th=[ 155], 00:25:28.019 | 70.00th=[ 163], 80.00th=[ 174], 90.00th=[ 192], 95.00th=[ 205], 00:25:28.019 | 99.00th=[ 1053], 99.50th=[ 1083], 99.90th=[ 1150], 99.95th=[ 1150], 00:25:28.019 | 99.99th=[ 1150] 00:25:28.019 bw ( KiB/s): min= 1536, max=236032, per=9.70%, avg=112131.45, stdev=48352.69, samples=20 00:25:28.019 iops : min= 6, max= 922, avg=438.00, stdev=188.88, samples=20 00:25:28.019 lat (msec) : 4=0.02%, 10=0.47%, 20=2.66%, 50=9.43%, 100=11.93% 00:25:28.019 lat (msec) : 250=74.01%, 500=0.07%, 1000=0.34%, 2000=1.08% 00:25:28.019 cpu : usr=1.23%, sys=1.56%, ctx=2546, majf=0, minf=1 00:25:28.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:28.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.019 issued rwts: total=0,4444,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.019 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.019 job1: (groupid=0, jobs=1): err= 0: pid=2852863: Sun Jul 14 04:41:47 2024 00:25:28.019 write: IOPS=311, BW=77.9MiB/s (81.7MB/s)(791MiB/10145msec); 0 zone resets 00:25:28.019 slat (usec): min=21, max=786905, avg=2722.93, stdev=16945.46 00:25:28.019 clat (usec): min=1904, max=1071.8k, avg=202428.98, stdev=183408.88 00:25:28.019 lat (usec): min=1967, max=1084.0k, avg=205151.91, stdev=185131.80 00:25:28.019 clat percentiles (msec): 00:25:28.019 | 1.00th=[ 9], 5.00th=[ 36], 10.00th=[ 71], 20.00th=[ 97], 00:25:28.019 | 30.00th=[ 117], 40.00th=[ 155], 50.00th=[ 171], 60.00th=[ 188], 00:25:28.019 | 70.00th=[ 215], 80.00th=[ 234], 90.00th=[ 292], 95.00th=[ 667], 00:25:28.019 | 99.00th=[ 1003], 99.50th=[ 1045], 99.90th=[ 1070], 99.95th=[ 1070], 00:25:28.019 | 99.99th=[ 1070] 00:25:28.019 bw ( KiB/s): min=11264, max=152782, per=6.86%, avg=79317.90, stdev=42643.53, samples=20 00:25:28.019 iops : min= 44, max= 596, avg=309.75, stdev=166.47, samples=20 00:25:28.019 lat (msec) : 2=0.03%, 4=0.16%, 10=0.89%, 20=2.02%, 50=3.00% 00:25:28.019 lat (msec) : 100=17.08%, 250=63.00%, 500=7.62%, 750=2.66%, 1000=2.50% 00:25:28.019 lat (msec) : 2000=1.04% 00:25:28.019 cpu : usr=0.92%, sys=1.12%, ctx=1508, majf=0, minf=1 00:25:28.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:25:28.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.019 issued rwts: total=0,3162,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.019 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.019 job2: (groupid=0, jobs=1): err= 0: pid=2852864: Sun Jul 14 04:41:47 2024 00:25:28.019 write: IOPS=424, BW=106MiB/s (111MB/s)(1074MiB/10125msec); 0 zone resets 00:25:28.019 slat (usec): min=19, max=68968, avg=2101.70, stdev=4214.96 00:25:28.019 clat (msec): min=14, max=303, avg=148.74, stdev=43.09 00:25:28.019 lat (msec): min=14, max=303, avg=150.84, stdev=43.54 00:25:28.019 clat percentiles (msec): 00:25:28.019 | 1.00th=[ 33], 5.00th=[ 75], 10.00th=[ 106], 20.00th=[ 123], 00:25:28.019 | 30.00th=[ 131], 40.00th=[ 138], 50.00th=[ 146], 60.00th=[ 155], 00:25:28.019 | 70.00th=[ 165], 80.00th=[ 180], 90.00th=[ 199], 95.00th=[ 220], 00:25:28.019 | 99.00th=[ 288], 99.50th=[ 296], 99.90th=[ 305], 99.95th=[ 
305], 00:25:28.019 | 99.99th=[ 305] 00:25:28.019 bw ( KiB/s): min=63488, max=133120, per=9.37%, avg=108304.80, stdev=19734.33, samples=20 00:25:28.019 iops : min= 248, max= 520, avg=423.00, stdev=77.17, samples=20 00:25:28.019 lat (msec) : 20=0.09%, 50=2.68%, 100=5.73%, 250=89.24%, 500=2.26% 00:25:28.019 cpu : usr=1.21%, sys=1.43%, ctx=1543, majf=0, minf=1 00:25:28.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:25:28.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.019 issued rwts: total=0,4294,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.019 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.019 job3: (groupid=0, jobs=1): err= 0: pid=2852865: Sun Jul 14 04:41:47 2024 00:25:28.019 write: IOPS=399, BW=100.0MiB/s (105MB/s)(1010MiB/10098msec); 0 zone resets 00:25:28.019 slat (usec): min=19, max=156799, avg=1847.24, stdev=6404.17 00:25:28.019 clat (usec): min=1421, max=1092.0k, avg=158071.24, stdev=149853.57 00:25:28.019 lat (usec): min=1462, max=1092.1k, avg=159918.48, stdev=150760.68 00:25:28.019 clat percentiles (msec): 00:25:28.019 | 1.00th=[ 6], 5.00th=[ 21], 10.00th=[ 62], 20.00th=[ 91], 00:25:28.019 | 30.00th=[ 114], 40.00th=[ 126], 50.00th=[ 140], 60.00th=[ 155], 00:25:28.019 | 70.00th=[ 163], 80.00th=[ 178], 90.00th=[ 213], 95.00th=[ 279], 00:25:28.019 | 99.00th=[ 1020], 99.50th=[ 1036], 99.90th=[ 1083], 99.95th=[ 1083], 00:25:28.019 | 99.99th=[ 1099] 00:25:28.020 bw ( KiB/s): min= 8192, max=164864, per=8.80%, avg=101742.80, stdev=41284.99, samples=20 00:25:28.020 iops : min= 32, max= 644, avg=397.40, stdev=161.28, samples=20 00:25:28.020 lat (msec) : 2=0.10%, 4=0.67%, 10=1.78%, 20=2.38%, 50=4.06% 00:25:28.020 lat (msec) : 100=14.07%, 250=71.35%, 500=2.63%, 750=0.40%, 1000=1.21% 00:25:28.020 lat (msec) : 2000=1.36% 00:25:28.020 cpu : usr=1.27%, sys=1.34%, ctx=1972, majf=0, minf=1 00:25:28.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:28.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.020 issued rwts: total=0,4038,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.020 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.020 job4: (groupid=0, jobs=1): err= 0: pid=2852866: Sun Jul 14 04:41:47 2024 00:25:28.020 write: IOPS=460, BW=115MiB/s (121MB/s)(1169MiB/10164msec); 0 zone resets 00:25:28.020 slat (usec): min=25, max=175388, avg=2026.49, stdev=6484.77 00:25:28.020 clat (msec): min=15, max=912, avg=137.00, stdev=99.57 00:25:28.020 lat (msec): min=15, max=912, avg=139.03, stdev=100.43 00:25:28.020 clat percentiles (msec): 00:25:28.020 | 1.00th=[ 59], 5.00th=[ 64], 10.00th=[ 67], 20.00th=[ 77], 00:25:28.020 | 30.00th=[ 83], 40.00th=[ 97], 50.00th=[ 110], 60.00th=[ 132], 00:25:28.020 | 70.00th=[ 161], 80.00th=[ 192], 90.00th=[ 215], 95.00th=[ 230], 00:25:28.020 | 99.00th=[ 852], 99.50th=[ 885], 99.90th=[ 902], 99.95th=[ 902], 00:25:28.020 | 99.99th=[ 911] 00:25:28.020 bw ( KiB/s): min=18944, max=222720, per=10.22%, avg=118085.40, stdev=58173.22, samples=20 00:25:28.020 iops : min= 74, max= 870, avg=461.25, stdev=227.26, samples=20 00:25:28.020 lat (msec) : 20=0.09%, 50=0.38%, 100=41.47%, 250=54.68%, 500=2.03% 00:25:28.020 lat (msec) : 750=0.34%, 1000=1.01% 00:25:28.020 cpu : usr=1.51%, sys=1.42%, ctx=1369, majf=0, minf=1 00:25:28.020 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:28.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.020 issued rwts: total=0,4676,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.020 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.020 job5: (groupid=0, jobs=1): err= 0: pid=2852878: Sun Jul 14 04:41:47 2024 00:25:28.020 write: IOPS=381, BW=95.4MiB/s (100MB/s)(969MiB/10151msec); 0 zone resets 00:25:28.020 slat (usec): min=16, max=100667, avg=1816.79, stdev=4881.17 00:25:28.020 clat (msec): min=3, max=1137, avg=165.81, stdev=146.57 00:25:28.020 lat (msec): min=3, max=1137, avg=167.63, stdev=147.04 00:25:28.020 clat percentiles (msec): 00:25:28.020 | 1.00th=[ 15], 5.00th=[ 45], 10.00th=[ 58], 20.00th=[ 77], 00:25:28.020 | 30.00th=[ 96], 40.00th=[ 131], 50.00th=[ 148], 60.00th=[ 174], 00:25:28.020 | 70.00th=[ 186], 80.00th=[ 201], 90.00th=[ 241], 95.00th=[ 288], 00:25:28.020 | 99.00th=[ 1020], 99.50th=[ 1099], 99.90th=[ 1133], 99.95th=[ 1133], 00:25:28.020 | 99.99th=[ 1133] 00:25:28.020 bw ( KiB/s): min=41984, max=206848, per=8.44%, avg=97542.95, stdev=44296.33, samples=20 00:25:28.020 iops : min= 164, max= 808, avg=381.00, stdev=173.05, samples=20 00:25:28.020 lat (msec) : 4=0.05%, 10=0.54%, 20=1.26%, 50=4.34%, 100=24.94% 00:25:28.020 lat (msec) : 250=60.22%, 500=5.60%, 750=1.29%, 1000=0.62%, 2000=1.14% 00:25:28.020 cpu : usr=1.14%, sys=1.40%, ctx=1981, majf=0, minf=1 00:25:28.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:28.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.020 issued rwts: total=0,3874,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.020 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.020 job6: (groupid=0, jobs=1): err= 0: pid=2852879: Sun Jul 14 04:41:47 2024 00:25:28.020 write: IOPS=378, BW=94.6MiB/s (99.2MB/s)(960MiB/10146msec); 0 zone resets 00:25:28.020 slat (usec): min=22, max=154828, avg=1979.56, stdev=7195.49 00:25:28.020 clat (msec): min=4, max=1033, avg=166.98, stdev=123.45 00:25:28.020 lat (msec): min=4, max=1033, avg=168.96, stdev=124.82 00:25:28.020 clat percentiles (msec): 00:25:28.020 | 1.00th=[ 16], 5.00th=[ 38], 10.00th=[ 63], 20.00th=[ 86], 00:25:28.020 | 30.00th=[ 109], 40.00th=[ 138], 50.00th=[ 165], 60.00th=[ 180], 00:25:28.020 | 70.00th=[ 192], 80.00th=[ 209], 90.00th=[ 249], 95.00th=[ 292], 00:25:28.020 | 99.00th=[ 953], 99.50th=[ 1028], 99.90th=[ 1036], 99.95th=[ 1036], 00:25:28.020 | 99.99th=[ 1036] 00:25:28.020 bw ( KiB/s): min= 8192, max=204288, per=8.37%, avg=96713.05, stdev=41408.20, samples=20 00:25:28.020 iops : min= 32, max= 798, avg=377.75, stdev=161.77, samples=20 00:25:28.020 lat (msec) : 10=0.49%, 20=0.99%, 50=5.70%, 100=18.64%, 250=64.44% 00:25:28.020 lat (msec) : 500=7.89%, 750=0.42%, 1000=0.55%, 2000=0.89% 00:25:28.020 cpu : usr=1.08%, sys=1.25%, ctx=2029, majf=0, minf=1 00:25:28.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:28.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.020 issued rwts: total=0,3841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.020 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.020 job7: (groupid=0, jobs=1): err= 0: 
pid=2852886: Sun Jul 14 04:41:47 2024 00:25:28.020 write: IOPS=371, BW=93.0MiB/s (97.5MB/s)(948MiB/10191msec); 0 zone resets 00:25:28.020 slat (usec): min=19, max=565360, avg=2095.52, stdev=12119.78 00:25:28.020 clat (msec): min=2, max=886, avg=169.87, stdev=127.00 00:25:28.020 lat (msec): min=3, max=886, avg=171.97, stdev=128.16 00:25:28.020 clat percentiles (msec): 00:25:28.020 | 1.00th=[ 12], 5.00th=[ 29], 10.00th=[ 56], 20.00th=[ 86], 00:25:28.020 | 30.00th=[ 128], 40.00th=[ 146], 50.00th=[ 157], 60.00th=[ 167], 00:25:28.020 | 70.00th=[ 180], 80.00th=[ 194], 90.00th=[ 257], 95.00th=[ 368], 00:25:28.020 | 99.00th=[ 760], 99.50th=[ 818], 99.90th=[ 860], 99.95th=[ 885], 00:25:28.020 | 99.99th=[ 885] 00:25:28.020 bw ( KiB/s): min= 2048, max=217088, per=8.25%, avg=95376.00, stdev=46634.74, samples=20 00:25:28.020 iops : min= 8, max= 848, avg=372.50, stdev=182.20, samples=20 00:25:28.020 lat (msec) : 4=0.08%, 10=0.63%, 20=2.08%, 50=6.25%, 100=15.20% 00:25:28.020 lat (msec) : 250=64.59%, 500=7.84%, 750=2.14%, 1000=1.19% 00:25:28.020 cpu : usr=1.15%, sys=1.13%, ctx=1920, majf=0, minf=1 00:25:28.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:25:28.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.020 issued rwts: total=0,3790,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.020 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.020 job8: (groupid=0, jobs=1): err= 0: pid=2852887: Sun Jul 14 04:41:47 2024 00:25:28.020 write: IOPS=416, BW=104MiB/s (109MB/s)(1051MiB/10101msec); 0 zone resets 00:25:28.020 slat (usec): min=18, max=98510, avg=1369.79, stdev=5136.94 00:25:28.020 clat (usec): min=1920, max=761791, avg=152291.47, stdev=102724.81 00:25:28.020 lat (usec): min=1976, max=770694, avg=153661.26, stdev=103636.35 00:25:28.020 clat percentiles (msec): 00:25:28.020 | 1.00th=[ 9], 5.00th=[ 20], 10.00th=[ 32], 20.00th=[ 61], 00:25:28.020 | 30.00th=[ 92], 40.00th=[ 117], 50.00th=[ 157], 60.00th=[ 178], 00:25:28.020 | 70.00th=[ 194], 80.00th=[ 213], 90.00th=[ 264], 95.00th=[ 300], 00:25:28.020 | 99.00th=[ 609], 99.50th=[ 701], 99.90th=[ 751], 99.95th=[ 760], 00:25:28.020 | 99.99th=[ 760] 00:25:28.020 bw ( KiB/s): min=34816, max=194560, per=9.17%, avg=106012.95, stdev=38517.86, samples=20 00:25:28.020 iops : min= 136, max= 760, avg=414.10, stdev=150.46, samples=20 00:25:28.020 lat (msec) : 2=0.02%, 4=0.17%, 10=1.28%, 20=3.76%, 50=12.06% 00:25:28.020 lat (msec) : 100=16.31%, 250=54.82%, 500=10.25%, 750=1.17%, 1000=0.17% 00:25:28.020 cpu : usr=1.27%, sys=1.23%, ctx=2762, majf=0, minf=1 00:25:28.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:28.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.020 issued rwts: total=0,4205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.020 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.020 job9: (groupid=0, jobs=1): err= 0: pid=2852890: Sun Jul 14 04:41:47 2024 00:25:28.020 write: IOPS=521, BW=130MiB/s (137MB/s)(1323MiB/10157msec); 0 zone resets 00:25:28.020 slat (usec): min=25, max=421535, avg=1734.94, stdev=7557.15 00:25:28.020 clat (msec): min=2, max=1168, avg=120.97, stdev=107.45 00:25:28.020 lat (msec): min=4, max=1168, avg=122.70, stdev=107.85 00:25:28.020 clat percentiles (msec): 00:25:28.020 | 1.00th=[ 21], 5.00th=[ 45], 
10.00th=[ 64], 20.00th=[ 70], 00:25:28.020 | 30.00th=[ 75], 40.00th=[ 83], 50.00th=[ 94], 60.00th=[ 106], 00:25:28.020 | 70.00th=[ 144], 80.00th=[ 167], 90.00th=[ 190], 95.00th=[ 213], 00:25:28.020 | 99.00th=[ 609], 99.50th=[ 1083], 99.90th=[ 1167], 99.95th=[ 1167], 00:25:28.020 | 99.99th=[ 1167] 00:25:28.020 bw ( KiB/s): min= 4096, max=231424, per=11.58%, avg=133862.95, stdev=59154.36, samples=20 00:25:28.020 iops : min= 16, max= 904, avg=522.80, stdev=231.06, samples=20 00:25:28.020 lat (msec) : 4=0.04%, 10=0.21%, 20=0.72%, 50=4.86%, 100=49.23% 00:25:28.020 lat (msec) : 250=42.89%, 500=0.89%, 750=0.40%, 2000=0.77% 00:25:28.020 cpu : usr=1.69%, sys=1.77%, ctx=1606, majf=0, minf=1 00:25:28.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:28.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.020 issued rwts: total=0,5293,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.020 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.020 job10: (groupid=0, jobs=1): err= 0: pid=2852891: Sun Jul 14 04:41:47 2024 00:25:28.020 write: IOPS=432, BW=108MiB/s (113MB/s)(1100MiB/10168msec); 0 zone resets 00:25:28.020 slat (usec): min=18, max=526264, avg=1486.26, stdev=9961.01 00:25:28.020 clat (usec): min=1744, max=820551, avg=146329.56, stdev=126611.55 00:25:28.020 lat (usec): min=1776, max=822150, avg=147815.82, stdev=127750.94 00:25:28.020 clat percentiles (msec): 00:25:28.020 | 1.00th=[ 7], 5.00th=[ 18], 10.00th=[ 34], 20.00th=[ 64], 00:25:28.020 | 30.00th=[ 86], 40.00th=[ 105], 50.00th=[ 131], 60.00th=[ 148], 00:25:28.020 | 70.00th=[ 161], 80.00th=[ 188], 90.00th=[ 243], 95.00th=[ 347], 00:25:28.020 | 99.00th=[ 760], 99.50th=[ 810], 99.90th=[ 818], 99.95th=[ 818], 00:25:28.020 | 99.99th=[ 818] 00:25:28.020 bw ( KiB/s): min= 6144, max=193024, per=9.60%, avg=111000.70, stdev=51641.37, samples=20 00:25:28.020 iops : min= 24, max= 754, avg=433.50, stdev=201.68, samples=20 00:25:28.020 lat (msec) : 2=0.02%, 4=0.30%, 10=1.84%, 20=3.52%, 50=11.27% 00:25:28.020 lat (msec) : 100=21.04%, 250=52.83%, 500=5.70%, 750=2.41%, 1000=1.07% 00:25:28.020 cpu : usr=1.27%, sys=1.53%, ctx=2821, majf=0, minf=1 00:25:28.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:28.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:28.020 issued rwts: total=0,4401,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.020 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:28.020 00:25:28.021 Run status group 0 (all jobs): 00:25:28.021 WRITE: bw=1129MiB/s (1184MB/s), 77.9MiB/s-130MiB/s (81.7MB/s-137MB/s), io=11.2GiB (12.1GB), run=10098-10191msec 00:25:28.021 00:25:28.021 Disk stats (read/write): 00:25:28.021 nvme0n1: ios=49/8688, merge=0/0, ticks=6234/1106351, in_queue=1112585, util=99.02% 00:25:28.021 nvme10n1: ios=44/6146, merge=0/0, ticks=1665/1202020, in_queue=1203685, util=99.26% 00:25:28.021 nvme1n1: ios=49/8393, merge=0/0, ticks=89/1209366, in_queue=1209455, util=98.10% 00:25:28.021 nvme2n1: ios=52/7890, merge=0/0, ticks=2176/1215590, in_queue=1217766, util=99.62% 00:25:28.021 nvme3n1: ios=46/9351, merge=0/0, ticks=1821/1229959, in_queue=1231780, util=99.75% 00:25:28.021 nvme4n1: ios=40/7569, merge=0/0, ticks=38/1215510, in_queue=1215548, util=98.23% 00:25:28.021 nvme5n1: ios=45/7545, merge=0/0, ticks=1860/1210390, 
in_queue=1212250, util=99.97% 00:25:28.021 nvme6n1: ios=47/7528, merge=0/0, ticks=2048/1223227, in_queue=1225275, util=100.00% 00:25:28.021 nvme7n1: ios=45/8169, merge=0/0, ticks=804/1221065, in_queue=1221869, util=100.00% 00:25:28.021 nvme8n1: ios=50/10390, merge=0/0, ticks=4803/1135116, in_queue=1139919, util=100.00% 00:25:28.021 nvme9n1: ios=0/8800, merge=0/0, ticks=0/1258179, in_queue=1258179, util=99.14% 00:25:28.021 04:41:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:28.021 04:41:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:28.021 04:41:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.021 04:41:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:28.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:28.021 04:41:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:28.021 04:41:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:28.021 04:41:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:28.021 04:41:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:25:28.021 04:41:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:28.021 04:41:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:25:28.021 04:41:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:28.021 04:41:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:28.021 04:41:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.021 04:41:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.021 04:41:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.021 04:41:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.021 04:41:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:28.021 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:28.021 04:41:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:28.021 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:28.021 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:28.021 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:25:28.021 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:28.021 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:25:28.021 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:28.021 04:41:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:28.021 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.021 04:41:48 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:28.021 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.021 04:41:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.021 04:41:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:28.280 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:28.280 04:41:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:28.280 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:28.280 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:28.280 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:25:28.280 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:28.280 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:25:28.280 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:28.280 04:41:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:28.280 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.280 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.280 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.280 04:41:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.280 04:41:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:28.539 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:28.539 04:41:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:28.539 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:28.539 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:28.539 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:25:28.539 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:28.539 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:25:28.539 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:28.539 04:41:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:28.539 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.539 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.539 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.539 04:41:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.539 04:41:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:28.799 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 
controller(s) 00:25:28.799 04:41:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:28.799 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:28.799 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:28.799 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:25:28.799 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:28.799 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:25:28.799 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:28.799 04:41:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:28.799 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.799 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.799 04:41:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.799 04:41:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.799 04:41:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:29.057 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:29.057 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:29.057 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:29.058 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:29.058 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:25:29.058 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:29.058 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:25:29.058 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:29.058 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:29.058 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.058 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.058 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.058 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.058 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:29.323 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:25:29.323 
04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:29.323 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.323 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:29.592 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:29.592 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:29.592 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:29.592 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:29.592 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:25:29.592 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:29.592 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:25:29.592 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:29.592 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:29.592 
04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.592 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.592 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.592 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.592 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:29.854 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:29.854 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:29.854 04:41:49 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:29.854 rmmod nvme_tcp 00:25:29.854 rmmod nvme_fabrics 00:25:29.854 rmmod nvme_keyring 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 2847437 ']' 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 2847437 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 2847437 ']' 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 2847437 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2847437 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2847437' 00:25:29.854 killing process with pid 2847437 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 2847437 00:25:29.854 04:41:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 2847437 00:25:30.420 04:41:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:30.420 04:41:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:30.420 04:41:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:30.420 04:41:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:30.420 04:41:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:30.420 04:41:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.420 04:41:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:30.420 04:41:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.325 04:41:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:32.325 00:25:32.325 real 1m0.605s 00:25:32.325 user 3m19.120s 00:25:32.325 sys 0m23.782s 00:25:32.325 
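For readers following the multiconnection teardown recorded above, the sequence from target/multiconnection.sh reduces to the loop below. This is a condensed sketch, not the script itself: rpc_cmd in the log is SPDK's JSON-RPC wrapper, shown here as a direct scripts/rpc.py call at an assumed path, and wait_serial_gone is a simplified stand-in for waitforserial_disconnect() from common/autotest_common.sh.

NVMF_SUBSYS=11
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed rpc.py location

wait_serial_gone() {
    # Poll until no block device reports the given NVMe serial any more.
    local serial=$1 i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( i++ > 15 )) && return 1
        sleep 1
    done
}

sync
for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"                 # drop the initiator-side controller
    wait_serial_gone "SPDK$i"                                        # wait for the namespace to vanish from lsblk
    "$rpc_py" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"    # remove the subsystem on the target
done
rm -f ./local-job0-0-verify.state                                    # fio verify state left over from the I/O phase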
04:41:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:32.325 04:41:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:32.325 ************************************ 00:25:32.325 END TEST nvmf_multiconnection 00:25:32.584 ************************************ 00:25:32.584 04:41:52 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:32.584 04:41:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:32.584 04:41:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:32.584 04:41:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:32.584 ************************************ 00:25:32.584 START TEST nvmf_initiator_timeout 00:25:32.584 ************************************ 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:32.584 * Looking for test storage... 00:25:32.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:32.584 04:41:52 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:32.584 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:32.585 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:32.585 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:32.585 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:32.585 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:32.585 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.585 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.585 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.585 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:32.585 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:32.585 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:32.585 04:41:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:34.486 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:34.486 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 
-- # [[ tcp == rdma ]] 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.486 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:34.487 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:34.487 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:34.487 04:41:54 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:34.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:34.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:25:34.487 00:25:34.487 --- 10.0.0.2 ping statistics --- 00:25:34.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.487 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:34.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:34.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:25:34.487 00:25:34.487 --- 10.0.0.1 ping statistics --- 00:25:34.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.487 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:34.487 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:34.746 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:34.746 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:34.746 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:34.746 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.746 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=2856212 00:25:34.746 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:34.746 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 2856212 00:25:34.746 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 2856212 ']' 00:25:34.746 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.746 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:34.746 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.746 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:34.746 04:41:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.746 [2024-07-14 04:41:54.739724] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:25:34.746 [2024-07-14 04:41:54.739792] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.746 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.746 [2024-07-14 04:41:54.809728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:34.746 [2024-07-14 04:41:54.906343] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
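The ping exchange above closes out the nvmf_tcp_init() network bring-up from nvmf/common.sh. In outline it is the following sketch, using the interface names and 10.0.0.0/24 addressing detected in this run; other hosts will report different net_devs:

NVMF_TARGET_INTERFACE=cvl_0_0
NVMF_INITIATOR_INTERFACE=cvl_0_1
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

# Start from clean interfaces, then move the target-side port into its own
# network namespace so initiator and target talk over a real TCP path on one host.
ip -4 addr flush "$NVMF_TARGET_INTERFACE"
ip -4 addr flush "$NVMF_INITIATOR_INTERFACE"
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set "$NVMF_TARGET_INTERFACE" netns "$NVMF_TARGET_NAMESPACE"

ip addr add 10.0.0.1/24 dev "$NVMF_INITIATOR_INTERFACE"
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev "$NVMF_TARGET_INTERFACE"
ip link set "$NVMF_INITIATOR_INTERFACE" up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$NVMF_TARGET_INTERFACE" up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

# Open the NVMe/TCP port and verify reachability in both directions.
iptables -I INPUT 1 -i "$NVMF_INITIATOR_INTERFACE" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1

After this point the target application is started inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), which is why the listener later binds to 10.0.0.2.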
00:25:34.746 [2024-07-14 04:41:54.906421] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.746 [2024-07-14 04:41:54.906434] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.746 [2024-07-14 04:41:54.906461] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.746 [2024-07-14 04:41:54.906473] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:34.746 [2024-07-14 04:41:54.906534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.746 [2024-07-14 04:41:54.906570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:34.746 [2024-07-14 04:41:54.906693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:34.746 [2024-07-14 04:41:54.906695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.005 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:35.005 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:25:35.005 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:35.005 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:35.005 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:35.005 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:35.005 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:35.005 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:35.005 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.005 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:35.005 Malloc0 00:25:35.005 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.005 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:35.005 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.006 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:35.006 Delay0 00:25:35.006 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.006 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:35.006 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.006 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:35.006 [2024-07-14 04:41:55.085781] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.006 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.006 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:25:35.006 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.006 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:35.006 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.006 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:35.006 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.006 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:35.006 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.006 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:35.006 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.006 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:35.006 [2024-07-14 04:41:55.114093] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.006 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.006 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:35.572 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:35.572 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:25:35.572 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:35.572 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:35.572 04:41:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:25:38.110 04:41:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:38.110 04:41:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:38.110 04:41:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:25:38.110 04:41:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:38.110 04:41:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:38.110 04:41:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:25:38.110 04:41:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2856541 00:25:38.110 04:41:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:38.110 04:41:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:38.110 [global] 00:25:38.110 thread=1 00:25:38.110 invalidate=1 00:25:38.110 rw=write 00:25:38.110 time_based=1 00:25:38.110 runtime=60 00:25:38.110 
ioengine=libaio 00:25:38.110 direct=1 00:25:38.110 bs=4096 00:25:38.110 iodepth=1 00:25:38.110 norandommap=0 00:25:38.110 numjobs=1 00:25:38.110 00:25:38.110 verify_dump=1 00:25:38.110 verify_backlog=512 00:25:38.110 verify_state_save=0 00:25:38.110 do_verify=1 00:25:38.110 verify=crc32c-intel 00:25:38.110 [job0] 00:25:38.110 filename=/dev/nvme0n1 00:25:38.110 Could not set queue depth (nvme0n1) 00:25:38.110 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:38.110 fio-3.35 00:25:38.110 Starting 1 thread 00:25:40.644 04:42:00 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:40.644 04:42:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.644 04:42:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.644 true 00:25:40.644 04:42:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.644 04:42:00 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:40.644 04:42:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.644 04:42:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.644 true 00:25:40.644 04:42:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.644 04:42:00 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:40.644 04:42:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.644 04:42:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.644 true 00:25:40.644 04:42:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.644 04:42:00 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:40.644 04:42:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.644 04:42:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.644 true 00:25:40.644 04:42:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.644 04:42:00 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:43.945 04:42:03 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:43.945 04:42:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.945 04:42:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:43.945 true 00:25:43.945 04:42:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.945 04:42:03 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:43.945 04:42:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.945 04:42:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:43.945 true 00:25:43.945 04:42:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.945 
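The initiator_timeout exercise running here follows a small fixed sequence: build a delay bdev over a malloc bdev, export it over NVMe/TCP, start a 60-second 4 KiB write job through the fio wrapper, stretch the delay latencies to roughly 31 seconds so in-flight commands outlive the initiator timeout, then restore them. Below is a condensed sketch, with rpc_cmd shown as a direct scripts/rpc.py call at an assumed path and the latency values (in microseconds) taken from this run:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed rpc.py location

# Target side: a 64 MiB malloc bdev wrapped in a delay bdev with 30 us latencies,
# exported over NVMe/TCP on the namespaced address set up earlier.
"$rpc_py" bdev_malloc_create 64 512 -b Malloc0
"$rpc_py" bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
"$rpc_py" nvmf_create_transport -t tcp -o -u 8192
"$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect, then drive I/O with
#   scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v
# against the resulting /dev/nvme0n1 (the job file is printed above).
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55

# While fio runs, raise the delay latencies to ~31 s (p99_write to ~310 s, as
# recorded in this run) so queued commands outlive the initiator timeout,
# then drop everything back to the 30 us baseline.
for lat in avg_read avg_write p99_read; do
    "$rpc_py" bdev_delay_update_latency Delay0 "$lat" 31000000
done
"$rpc_py" bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3
for lat in avg_read avg_write p99_read p99_write; do
    "$rpc_py" bdev_delay_update_latency Delay0 "$lat" 30
done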
04:42:03 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:43.945 04:42:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.945 04:42:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:43.945 true 00:25:43.945 04:42:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.945 04:42:03 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:43.945 04:42:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.945 04:42:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:43.945 true 00:25:43.945 04:42:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.945 04:42:03 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:43.945 04:42:03 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2856541 00:26:40.247 00:26:40.247 job0: (groupid=0, jobs=1): err= 0: pid=2856710: Sun Jul 14 04:42:58 2024 00:26:40.247 read: IOPS=7, BW=30.6KiB/s (31.3kB/s)(1836KiB/60021msec) 00:26:40.247 slat (usec): min=11, max=11445, avg=59.71, stdev=587.37 00:26:40.247 clat (usec): min=546, max=41107k, avg=130182.83, stdev=1916839.47 00:26:40.247 lat (usec): min=566, max=41108k, avg=130242.54, stdev=1916837.52 00:26:40.247 clat percentiles (usec): 00:26:40.247 | 1.00th=[ 570], 5.00th=[ 41157], 10.00th=[ 41157], 00:26:40.247 | 20.00th=[ 41157], 30.00th=[ 41157], 40.00th=[ 41157], 00:26:40.247 | 50.00th=[ 41157], 60.00th=[ 41157], 70.00th=[ 41681], 00:26:40.247 | 80.00th=[ 42206], 90.00th=[ 42206], 95.00th=[ 42206], 00:26:40.247 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[17112761], 00:26:40.247 | 99.95th=[17112761], 99.99th=[17112761] 00:26:40.247 write: IOPS=8, BW=34.1KiB/s (34.9kB/s)(2048KiB/60021msec); 0 zone resets 00:26:40.247 slat (usec): min=10, max=29213, avg=85.74, stdev=1289.85 00:26:40.247 clat (usec): min=271, max=625, avg=368.17, stdev=53.78 00:26:40.247 lat (usec): min=285, max=29677, avg=453.91, stdev=1295.37 00:26:40.247 clat percentiles (usec): 00:26:40.247 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 306], 20.00th=[ 326], 00:26:40.247 | 30.00th=[ 334], 40.00th=[ 347], 50.00th=[ 359], 60.00th=[ 379], 00:26:40.247 | 70.00th=[ 396], 80.00th=[ 412], 90.00th=[ 433], 95.00th=[ 461], 00:26:40.247 | 99.00th=[ 515], 99.50th=[ 545], 99.90th=[ 627], 99.95th=[ 627], 00:26:40.247 | 99.99th=[ 627] 00:26:40.247 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:26:40.247 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:26:40.247 lat (usec) : 500=51.70%, 750=1.75% 00:26:40.247 lat (msec) : 50=46.45%, >=2000=0.10% 00:26:40.247 cpu : usr=0.03%, sys=0.05%, ctx=976, majf=0, minf=2 00:26:40.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:40.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.247 issued rwts: total=459,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:40.247 00:26:40.247 Run status group 0 (all jobs): 00:26:40.247 READ: bw=30.6KiB/s (31.3kB/s), 30.6KiB/s-30.6KiB/s (31.3kB/s-31.3kB/s), io=1836KiB 
(1880kB), run=60021-60021msec 00:26:40.247 WRITE: bw=34.1KiB/s (34.9kB/s), 34.1KiB/s-34.1KiB/s (34.9kB/s-34.9kB/s), io=2048KiB (2097kB), run=60021-60021msec 00:26:40.247 00:26:40.247 Disk stats (read/write): 00:26:40.247 nvme0n1: ios=508/512, merge=0/0, ticks=19365/181, in_queue=19546, util=99.74% 00:26:40.247 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:40.247 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:40.247 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:40.247 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:26:40.247 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:40.247 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:40.247 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:40.248 nvmf hotplug test: fio successful as expected 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:40.248 rmmod nvme_tcp 00:26:40.248 rmmod nvme_fabrics 00:26:40.248 rmmod nvme_keyring 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 2856212 ']' 00:26:40.248 04:42:58 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 2856212 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 2856212 ']' 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 2856212 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2856212 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2856212' 00:26:40.248 killing process with pid 2856212 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 2856212 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 2856212 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:40.248 04:42:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.507 04:43:00 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:40.507 00:26:40.507 real 1m8.076s 00:26:40.507 user 4m11.138s 00:26:40.507 sys 0m6.062s 00:26:40.507 04:43:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:40.507 04:43:00 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:40.507 ************************************ 00:26:40.507 END TEST nvmf_initiator_timeout 00:26:40.507 ************************************ 00:26:40.507 04:43:00 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:40.507 04:43:00 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:40.507 04:43:00 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:40.507 04:43:00 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:40.507 04:43:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:42.407 
04:43:02 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:42.407 04:43:02 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:42.408 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:42.408 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@366 
-- # (( 0 > 0 )) 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:42.408 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:42.408 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:42.408 04:43:02 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:42.408 04:43:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:42.408 04:43:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:42.408 04:43:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:42.408 ************************************ 00:26:42.408 START TEST nvmf_perf_adq 00:26:42.408 ************************************ 00:26:42.408 04:43:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:42.666 * Looking for test storage... 
00:26:42.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:42.666 04:43:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:44.570 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:44.570 Found 0000:0a:00.1 (0x8086 - 0x159b) 
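The gather_supported_nvmf_pci_devs pass traced here matches the two E810 functions (vendor 0x8086, device 0x159b) and then resolves each to the kernel net device created for it through sysfs. A small standalone sketch of that lookup, assuming the same 0000:0a:00.0 / 0000:0a:00.1 addresses seen in this run:

  # sketch: map an E810 PCI function to the net device(s) the kernel created for it
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      # each entry under .../net/ is a network interface bound to this PCI function
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$dev" ] || continue
          echo "Found net device under $pci: ${dev##*/}"
      done
  done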
00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.570 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:44.571 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.571 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:44.571 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:44.571 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.571 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:44.571 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:44.571 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.571 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:44.571 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.571 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:44.571 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.571 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:44.571 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:44.571 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.571 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:44.571 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:44.571 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.571 04:43:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:44.571 04:43:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:44.571 04:43:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:44.571 04:43:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:44.571 04:43:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:44.571 04:43:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:45.137 04:43:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:47.044 04:43:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:52.318 04:43:12 
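The adq_reload_driver step just traced simply unloads and reloads the ice driver and waits for the ports to settle before ADQ is configured. A minimal sketch, assuming root and that only the ice module is involved:

  # sketch: reload the Intel ice driver so ADQ-related state starts from scratch
  rmmod ice 2>/dev/null || true   # ignore failure if the module was not loaded
  modprobe ice                    # reload the driver for the E810 ports
  sleep 5                         # as in the trace, give the links time to come back up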
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:52.318 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:52.318 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:52.318 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:52.318 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:52.319 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:52.319 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:52.319 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:52.319 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:52.319 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:52.320 04:43:12 
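nvmf_tcp_init, traced above, splits the two physical ports into a target side and an initiator side by moving cvl_0_0 into a private network namespace and addressing the pair as 10.0.0.2 / 10.0.0.1. A condensed sketch of that sequence, assuming the same interface names and subnet as this run:

  # sketch: isolate the target port in its own namespace and wire up 10.0.0.0/24
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator-side address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                      # initiator -> target sanity check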
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:52.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:52.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:26:52.320 00:26:52.320 --- 10.0.0.2 ping statistics --- 00:26:52.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.320 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:52.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:52.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:26:52.320 00:26:52.320 --- 10.0.0.1 ping statistics --- 00:26:52.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.320 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2868835 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2868835 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 2868835 ']' 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:52.320 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:52.320 [2024-07-14 04:43:12.351150] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
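nvmfappstart, traced above, launches nvmf_tgt inside the target namespace with --wait-for-rpc and then blocks until the target is reachable over its RPC socket. A minimal sketch of that launch-and-wait, assuming a relative build path, the default /var/tmp/spdk.sock socket, and a hypothetical 30-second limit (the real waitforlisten helper is more involved):

  # sketch: start the target in the namespace, then wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  for _ in $(seq 1 30); do
      [ -S /var/tmp/spdk.sock ] && break                  # socket exists: target is ready for RPCs
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 1
  done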
00:26:52.320 [2024-07-14 04:43:12.351247] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:52.320 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.320 [2024-07-14 04:43:12.413949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:52.320 [2024-07-14 04:43:12.498709] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:52.320 [2024-07-14 04:43:12.498761] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:52.320 [2024-07-14 04:43:12.498789] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:52.320 [2024-07-14 04:43:12.498800] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:52.320 [2024-07-14 04:43:12.498810] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:52.320 [2024-07-14 04:43:12.498955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.320 [2024-07-14 04:43:12.499017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.320 [2024-07-14 04:43:12.498986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:52.320 [2024-07-14 04:43:12.499015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:52.577 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:52.577 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:26:52.577 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:52.577 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:52.577 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:52.577 04:43:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:52.577 04:43:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:52.577 04:43:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:52.577 04:43:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:52.578 [2024-07-14 04:43:12.726412] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:52.578 Malloc1 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.578 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:52.837 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.837 04:43:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:52.837 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.837 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:52.837 [2024-07-14 04:43:12.777405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.837 04:43:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.837 04:43:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2868861 00:26:52.837 04:43:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:52.837 04:43:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:52.837 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.738 04:43:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:54.738 04:43:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.738 04:43:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:54.738 04:43:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.738 04:43:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:54.738 "tick_rate": 2700000000, 
00:26:54.738 "poll_groups": [ 00:26:54.738 { 00:26:54.738 "name": "nvmf_tgt_poll_group_000", 00:26:54.738 "admin_qpairs": 1, 00:26:54.738 "io_qpairs": 1, 00:26:54.738 "current_admin_qpairs": 1, 00:26:54.738 "current_io_qpairs": 1, 00:26:54.738 "pending_bdev_io": 0, 00:26:54.738 "completed_nvme_io": 19014, 00:26:54.738 "transports": [ 00:26:54.738 { 00:26:54.738 "trtype": "TCP" 00:26:54.738 } 00:26:54.738 ] 00:26:54.738 }, 00:26:54.738 { 00:26:54.738 "name": "nvmf_tgt_poll_group_001", 00:26:54.738 "admin_qpairs": 0, 00:26:54.738 "io_qpairs": 1, 00:26:54.738 "current_admin_qpairs": 0, 00:26:54.738 "current_io_qpairs": 1, 00:26:54.738 "pending_bdev_io": 0, 00:26:54.738 "completed_nvme_io": 19833, 00:26:54.738 "transports": [ 00:26:54.738 { 00:26:54.738 "trtype": "TCP" 00:26:54.738 } 00:26:54.738 ] 00:26:54.738 }, 00:26:54.738 { 00:26:54.738 "name": "nvmf_tgt_poll_group_002", 00:26:54.738 "admin_qpairs": 0, 00:26:54.738 "io_qpairs": 1, 00:26:54.738 "current_admin_qpairs": 0, 00:26:54.738 "current_io_qpairs": 1, 00:26:54.738 "pending_bdev_io": 0, 00:26:54.738 "completed_nvme_io": 20764, 00:26:54.738 "transports": [ 00:26:54.738 { 00:26:54.738 "trtype": "TCP" 00:26:54.738 } 00:26:54.738 ] 00:26:54.738 }, 00:26:54.738 { 00:26:54.738 "name": "nvmf_tgt_poll_group_003", 00:26:54.738 "admin_qpairs": 0, 00:26:54.738 "io_qpairs": 1, 00:26:54.738 "current_admin_qpairs": 0, 00:26:54.738 "current_io_qpairs": 1, 00:26:54.738 "pending_bdev_io": 0, 00:26:54.738 "completed_nvme_io": 19627, 00:26:54.738 "transports": [ 00:26:54.738 { 00:26:54.738 "trtype": "TCP" 00:26:54.738 } 00:26:54.738 ] 00:26:54.738 } 00:26:54.738 ] 00:26:54.738 }' 00:26:54.738 04:43:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:54.738 04:43:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:54.738 04:43:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:54.738 04:43:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:54.738 04:43:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2868861 00:27:02.894 Initializing NVMe Controllers 00:27:02.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:02.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:02.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:02.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:02.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:02.894 Initialization complete. Launching workers. 
00:27:02.894 ======================================================== 00:27:02.894 Latency(us) 00:27:02.894 Device Information : IOPS MiB/s Average min max 00:27:02.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10205.40 39.86 6273.12 2210.09 8812.86 00:27:02.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10435.30 40.76 6134.34 2067.25 9193.51 00:27:02.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10816.90 42.25 5917.76 2259.67 8701.54 00:27:02.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9983.70 39.00 6409.92 1884.81 10883.71 00:27:02.894 ======================================================== 00:27:02.894 Total : 41441.29 161.88 6178.37 1884.81 10883.71 00:27:02.894 00:27:02.894 04:43:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:02.894 04:43:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:02.894 04:43:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:02.894 04:43:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:02.894 04:43:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:02.894 04:43:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:02.894 04:43:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:02.894 rmmod nvme_tcp 00:27:02.894 rmmod nvme_fabrics 00:27:02.894 rmmod nvme_keyring 00:27:02.894 04:43:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:02.894 04:43:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:02.894 04:43:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:02.894 04:43:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2868835 ']' 00:27:02.894 04:43:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2868835 00:27:02.894 04:43:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 2868835 ']' 00:27:02.894 04:43:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 2868835 00:27:02.894 04:43:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:02.894 04:43:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:02.894 04:43:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2868835 00:27:02.894 04:43:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:02.894 04:43:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:02.894 04:43:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2868835' 00:27:02.894 killing process with pid 2868835 00:27:02.894 04:43:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 2868835 00:27:02.894 04:43:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 2868835 00:27:03.152 04:43:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:03.152 04:43:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:03.152 04:43:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:03.152 04:43:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:03.152 04:43:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:03.152 04:43:23 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.152 04:43:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:03.152 04:43:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.689 04:43:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:05.689 04:43:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:05.689 04:43:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:05.947 04:43:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:07.849 04:43:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:13.119 04:43:32 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:13.119 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:13.119 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:13.119 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:13.119 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:13.119 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:13.119 
04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:13.120 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:13.120 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:13.120 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:13.120 04:43:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:13.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:13.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:27:13.120 00:27:13.120 --- 10.0.0.2 ping statistics --- 00:27:13.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.120 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:13.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:13.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:27:13.120 00:27:13.120 --- 10.0.0.1 ping statistics --- 00:27:13.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.120 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:13.120 net.core.busy_poll = 1 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:13.120 net.core.busy_read = 1 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2871491 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2871491 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 2871491 ']' 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:13.120 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.120 [2024-07-14 04:43:33.234175] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:13.120 [2024-07-14 04:43:33.234291] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.120 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.120 [2024-07-14 04:43:33.300774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:13.377 [2024-07-14 04:43:33.391648] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:13.377 [2024-07-14 04:43:33.391723] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:13.377 [2024-07-14 04:43:33.391736] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:13.377 [2024-07-14 04:43:33.391747] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:13.377 [2024-07-14 04:43:33.391770] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
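The adq_configure_driver steps traced just above are the ADQ-specific part of the setup: hardware TC offload and busy polling are enabled, the NIC queues are split into two traffic classes with mqprio, and a hardware flower filter pins NVMe/TCP traffic for 10.0.0.2:4420 onto traffic class 1. A condensed sketch using the same values the harness logs; set_xps_rxqs is an SPDK helper script, and the 2@0 2@2 queue split is specific to this two-TC configuration:

    # Run target-side commands inside the namespace created earlier.
    ns() { ip netns exec cvl_0_0_ns_spdk "$@"; }

    ns ethtool --offload cvl_0_0 hw-tc-offload on
    ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # Two traffic classes: TC0 -> queues 0-1, TC1 -> queues 2-3, offloaded in channel mode.
    ns tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ns tc qdisc add dev cvl_0_0 ingress
    # Steer NVMe/TCP (dst 10.0.0.2:4420) into TC1 entirely in hardware (skip_sw).
    ns tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # Align XPS and receive queues per core (SPDK helper script).
    ns /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0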
00:27:13.377 [2024-07-14 04:43:33.391821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.377 [2024-07-14 04:43:33.391887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:13.377 [2024-07-14 04:43:33.391946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:13.377 [2024-07-14 04:43:33.391948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.377 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:13.377 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:13.377 04:43:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:13.377 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:13.377 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.377 04:43:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:13.377 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:13.377 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:13.377 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:13.377 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.377 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.377 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.377 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:13.377 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:13.377 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.377 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.377 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.377 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:13.377 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.377 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.633 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.634 [2024-07-14 04:43:33.611429] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.634 Malloc1 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.634 04:43:33 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.634 [2024-07-14 04:43:33.662933] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2871559 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:13.634 04:43:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:13.634 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.530 04:43:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:15.530 04:43:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.530 04:43:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:15.530 04:43:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.530 04:43:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:15.530 "tick_rate": 2700000000, 00:27:15.530 "poll_groups": [ 00:27:15.530 { 00:27:15.530 "name": "nvmf_tgt_poll_group_000", 00:27:15.530 "admin_qpairs": 1, 00:27:15.530 "io_qpairs": 1, 00:27:15.530 "current_admin_qpairs": 1, 00:27:15.530 "current_io_qpairs": 1, 00:27:15.530 "pending_bdev_io": 0, 00:27:15.530 "completed_nvme_io": 24398, 00:27:15.530 "transports": [ 00:27:15.530 { 00:27:15.530 "trtype": "TCP" 00:27:15.530 } 00:27:15.530 ] 00:27:15.530 }, 00:27:15.530 { 00:27:15.530 "name": "nvmf_tgt_poll_group_001", 00:27:15.530 "admin_qpairs": 0, 00:27:15.530 "io_qpairs": 3, 00:27:15.530 "current_admin_qpairs": 0, 00:27:15.530 "current_io_qpairs": 3, 00:27:15.530 "pending_bdev_io": 0, 00:27:15.530 "completed_nvme_io": 27604, 00:27:15.530 "transports": [ 00:27:15.530 { 00:27:15.530 "trtype": "TCP" 00:27:15.530 } 00:27:15.530 ] 00:27:15.530 }, 00:27:15.530 { 00:27:15.530 "name": "nvmf_tgt_poll_group_002", 00:27:15.530 "admin_qpairs": 0, 00:27:15.530 "io_qpairs": 0, 00:27:15.530 "current_admin_qpairs": 0, 00:27:15.530 "current_io_qpairs": 0, 00:27:15.530 "pending_bdev_io": 0, 00:27:15.530 "completed_nvme_io": 0, 
00:27:15.530 "transports": [ 00:27:15.530 { 00:27:15.530 "trtype": "TCP" 00:27:15.530 } 00:27:15.530 ] 00:27:15.530 }, 00:27:15.530 { 00:27:15.530 "name": "nvmf_tgt_poll_group_003", 00:27:15.530 "admin_qpairs": 0, 00:27:15.530 "io_qpairs": 0, 00:27:15.530 "current_admin_qpairs": 0, 00:27:15.530 "current_io_qpairs": 0, 00:27:15.530 "pending_bdev_io": 0, 00:27:15.530 "completed_nvme_io": 0, 00:27:15.530 "transports": [ 00:27:15.530 { 00:27:15.530 "trtype": "TCP" 00:27:15.530 } 00:27:15.530 ] 00:27:15.530 } 00:27:15.530 ] 00:27:15.530 }' 00:27:15.530 04:43:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:15.530 04:43:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:15.786 04:43:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:15.786 04:43:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:15.786 04:43:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2871559 00:27:23.891 Initializing NVMe Controllers 00:27:23.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:23.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:23.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:23.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:23.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:23.891 Initialization complete. Launching workers. 00:27:23.891 ======================================================== 00:27:23.891 Latency(us) 00:27:23.891 Device Information : IOPS MiB/s Average min max 00:27:23.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4815.30 18.81 13291.48 2236.86 62254.85 00:27:23.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12641.70 49.38 5070.70 1803.47 44657.60 00:27:23.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4407.10 17.22 14576.93 2392.70 61326.15 00:27:23.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4921.80 19.23 13036.71 2056.37 62682.02 00:27:23.891 ======================================================== 00:27:23.891 Total : 26785.90 104.63 9576.34 1803.47 62682.02 00:27:23.891 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:23.891 rmmod nvme_tcp 00:27:23.891 rmmod nvme_fabrics 00:27:23.891 rmmod nvme_keyring 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2871491 ']' 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 2871491 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 2871491 ']' 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 2871491 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2871491 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2871491' 00:27:23.891 killing process with pid 2871491 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 2871491 00:27:23.891 04:43:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 2871491 00:27:24.149 04:43:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:24.149 04:43:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:24.149 04:43:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:24.149 04:43:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:24.149 04:43:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:24.149 04:43:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.149 04:43:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:24.149 04:43:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.491 04:43:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:27.491 04:43:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:27.491 00:27:27.491 real 0m44.608s 00:27:27.491 user 2m36.467s 00:27:27.491 sys 0m10.502s 00:27:27.491 04:43:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:27.491 04:43:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:27.491 ************************************ 00:27:27.491 END TEST nvmf_perf_adq 00:27:27.491 ************************************ 00:27:27.491 04:43:47 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:27.491 04:43:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:27.491 04:43:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:27.491 04:43:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:27.491 ************************************ 00:27:27.491 START TEST nvmf_shutdown 00:27:27.491 ************************************ 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:27.491 * Looking for test storage... 
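nvmftestfini, called at the end of the perf_adq test (and also registered as the EXIT trap), undoes the setup in the order visible above: the host-side NVMe/TCP modules are unloaded, the target process is killed and reaped, and the namespace plumbing is removed. A condensed sketch of the same order; _remove_spdk_ns is a harness helper whose body is not shown in the trace, so deleting the namespace directly is an assumption here:

    sync
    modprobe -v -r nvme-tcp           # rmmod nvme_tcp / nvme_fabrics / nvme_keyring echoed above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                   # the nvmf_tgt started by nvmfappstart
    wait "$nvmfpid"
    ip netns delete cvl_0_0_ns_spdk   # assumption: what _remove_spdk_ns amounts to for this run
    ip -4 addr flush cvl_0_1          # drop the initiator-side address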
00:27:27.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.491 04:43:47 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:27.492 ************************************ 00:27:27.492 START TEST nvmf_shutdown_tc1 00:27:27.492 ************************************ 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:27:27.492 04:43:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:27.492 04:43:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
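The long trace that follows is nvmf/common.sh rediscovering the NICs for the shutdown suite: it matches the supported Intel/Mellanox PCI device IDs (0x159b, the ice-driven E810 here), then maps each matching PCI function to its kernel netdev through sysfs and keeps only interfaces that are up. A condensed sketch of that lookup, assuming the same 0000:0a:00.0 and 0000:0a:00.1 functions; reading operstate is one way to obtain the "up" state the trace compares against, not necessarily the harness's exact mechanism:

    # Map each supported PCI function to the net device the kernel bound to it.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            dev=${path##*/}                                   # e.g. cvl_0_0, cvl_0_1
            if [ "$(cat /sys/class/net/"$dev"/operstate)" = up ]; then
                echo "Found net devices under $pci: $dev"
            fi
        done
    done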
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:29.395 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:29.395 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:29.395 04:43:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:29.395 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:29.395 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:29.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:29.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:27:29.395 00:27:29.395 --- 10.0.0.2 ping statistics --- 00:27:29.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.395 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:27:29.395 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:29.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:29.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:27:29.396 00:27:29.396 --- 10.0.0.1 ping statistics --- 00:27:29.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.396 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2874809 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2874809 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 2874809 ']' 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:29.396 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:29.396 [2024-07-14 04:43:49.460717] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
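nvmfappstart, traced just above, starts a second nvmf_tgt inside the namespace, this time with core mask 0x1E, and then blocks in waitforlisten until the RPC socket answers. A rough sketch of that pattern; the polling loop is an approximation of what waitforlisten does rather than its actual code, and the paths assume this workspace layout:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # Approximation: poll the UNIX-domain RPC socket until the target responds.
    until "$SPDK"/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done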
00:27:29.396 [2024-07-14 04:43:49.460800] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.396 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.396 [2024-07-14 04:43:49.524359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:29.654 [2024-07-14 04:43:49.615337] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.654 [2024-07-14 04:43:49.615400] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.654 [2024-07-14 04:43:49.615420] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:29.654 [2024-07-14 04:43:49.615446] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:29.654 [2024-07-14 04:43:49.615486] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:29.654 [2024-07-14 04:43:49.615603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:29.654 [2024-07-14 04:43:49.615666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:29.654 [2024-07-14 04:43:49.615694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:29.654 [2024-07-14 04:43:49.615699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:29.654 [2024-07-14 04:43:49.772714] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:29.654 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.655 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:29.655 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:29.655 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.655 04:43:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:29.655 Malloc1 00:27:29.913 [2024-07-14 04:43:49.862266] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:29.913 Malloc2 00:27:29.913 Malloc3 00:27:29.913 Malloc4 00:27:29.913 Malloc5 00:27:29.913 Malloc6 00:27:30.171 Malloc7 00:27:30.171 Malloc8 00:27:30.171 Malloc9 00:27:30.171 Malloc10 00:27:30.171 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.171 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2874981 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2874981 /var/tmp/bdevperf.sock 00:27:30.172 04:43:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 2874981 ']' 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:30.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.172 { 00:27:30.172 "params": { 00:27:30.172 "name": "Nvme$subsystem", 00:27:30.172 "trtype": "$TEST_TRANSPORT", 00:27:30.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.172 "adrfam": "ipv4", 00:27:30.172 "trsvcid": "$NVMF_PORT", 00:27:30.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.172 "hdgst": ${hdgst:-false}, 00:27:30.172 "ddgst": ${ddgst:-false} 00:27:30.172 }, 00:27:30.172 "method": "bdev_nvme_attach_controller" 00:27:30.172 } 00:27:30.172 EOF 00:27:30.172 )") 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.172 { 00:27:30.172 "params": { 00:27:30.172 "name": "Nvme$subsystem", 00:27:30.172 "trtype": "$TEST_TRANSPORT", 00:27:30.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.172 "adrfam": "ipv4", 00:27:30.172 "trsvcid": "$NVMF_PORT", 00:27:30.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.172 "hdgst": ${hdgst:-false}, 00:27:30.172 "ddgst": ${ddgst:-false} 00:27:30.172 }, 00:27:30.172 "method": "bdev_nvme_attach_controller" 00:27:30.172 } 00:27:30.172 EOF 00:27:30.172 )") 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.172 { 00:27:30.172 "params": { 00:27:30.172 "name": "Nvme$subsystem", 00:27:30.172 "trtype": 
"$TEST_TRANSPORT", 00:27:30.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.172 "adrfam": "ipv4", 00:27:30.172 "trsvcid": "$NVMF_PORT", 00:27:30.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.172 "hdgst": ${hdgst:-false}, 00:27:30.172 "ddgst": ${ddgst:-false} 00:27:30.172 }, 00:27:30.172 "method": "bdev_nvme_attach_controller" 00:27:30.172 } 00:27:30.172 EOF 00:27:30.172 )") 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.172 { 00:27:30.172 "params": { 00:27:30.172 "name": "Nvme$subsystem", 00:27:30.172 "trtype": "$TEST_TRANSPORT", 00:27:30.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.172 "adrfam": "ipv4", 00:27:30.172 "trsvcid": "$NVMF_PORT", 00:27:30.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.172 "hdgst": ${hdgst:-false}, 00:27:30.172 "ddgst": ${ddgst:-false} 00:27:30.172 }, 00:27:30.172 "method": "bdev_nvme_attach_controller" 00:27:30.172 } 00:27:30.172 EOF 00:27:30.172 )") 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.172 { 00:27:30.172 "params": { 00:27:30.172 "name": "Nvme$subsystem", 00:27:30.172 "trtype": "$TEST_TRANSPORT", 00:27:30.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.172 "adrfam": "ipv4", 00:27:30.172 "trsvcid": "$NVMF_PORT", 00:27:30.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.172 "hdgst": ${hdgst:-false}, 00:27:30.172 "ddgst": ${ddgst:-false} 00:27:30.172 }, 00:27:30.172 "method": "bdev_nvme_attach_controller" 00:27:30.172 } 00:27:30.172 EOF 00:27:30.172 )") 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.172 { 00:27:30.172 "params": { 00:27:30.172 "name": "Nvme$subsystem", 00:27:30.172 "trtype": "$TEST_TRANSPORT", 00:27:30.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.172 "adrfam": "ipv4", 00:27:30.172 "trsvcid": "$NVMF_PORT", 00:27:30.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.172 "hdgst": ${hdgst:-false}, 00:27:30.172 "ddgst": ${ddgst:-false} 00:27:30.172 }, 00:27:30.172 "method": "bdev_nvme_attach_controller" 00:27:30.172 } 00:27:30.172 EOF 00:27:30.172 )") 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.172 { 00:27:30.172 "params": { 00:27:30.172 "name": "Nvme$subsystem", 00:27:30.172 "trtype": "$TEST_TRANSPORT", 
00:27:30.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.172 "adrfam": "ipv4", 00:27:30.172 "trsvcid": "$NVMF_PORT", 00:27:30.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.172 "hdgst": ${hdgst:-false}, 00:27:30.172 "ddgst": ${ddgst:-false} 00:27:30.172 }, 00:27:30.172 "method": "bdev_nvme_attach_controller" 00:27:30.172 } 00:27:30.172 EOF 00:27:30.172 )") 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.172 { 00:27:30.172 "params": { 00:27:30.172 "name": "Nvme$subsystem", 00:27:30.172 "trtype": "$TEST_TRANSPORT", 00:27:30.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.172 "adrfam": "ipv4", 00:27:30.172 "trsvcid": "$NVMF_PORT", 00:27:30.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.172 "hdgst": ${hdgst:-false}, 00:27:30.172 "ddgst": ${ddgst:-false} 00:27:30.172 }, 00:27:30.172 "method": "bdev_nvme_attach_controller" 00:27:30.172 } 00:27:30.172 EOF 00:27:30.172 )") 00:27:30.172 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:30.430 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.430 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.430 { 00:27:30.430 "params": { 00:27:30.430 "name": "Nvme$subsystem", 00:27:30.430 "trtype": "$TEST_TRANSPORT", 00:27:30.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.430 "adrfam": "ipv4", 00:27:30.430 "trsvcid": "$NVMF_PORT", 00:27:30.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.430 "hdgst": ${hdgst:-false}, 00:27:30.430 "ddgst": ${ddgst:-false} 00:27:30.430 }, 00:27:30.430 "method": "bdev_nvme_attach_controller" 00:27:30.430 } 00:27:30.430 EOF 00:27:30.430 )") 00:27:30.430 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:30.430 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.430 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.430 { 00:27:30.430 "params": { 00:27:30.430 "name": "Nvme$subsystem", 00:27:30.430 "trtype": "$TEST_TRANSPORT", 00:27:30.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.430 "adrfam": "ipv4", 00:27:30.430 "trsvcid": "$NVMF_PORT", 00:27:30.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.430 "hdgst": ${hdgst:-false}, 00:27:30.430 "ddgst": ${ddgst:-false} 00:27:30.430 }, 00:27:30.430 "method": "bdev_nvme_attach_controller" 00:27:30.430 } 00:27:30.430 EOF 00:27:30.430 )") 00:27:30.430 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:30.430 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:30.430 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:30.430 04:43:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:30.430 "params": { 00:27:30.430 "name": "Nvme1", 00:27:30.430 "trtype": "tcp", 00:27:30.430 "traddr": "10.0.0.2", 00:27:30.430 "adrfam": "ipv4", 00:27:30.430 "trsvcid": "4420", 00:27:30.430 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:30.430 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:30.430 "hdgst": false, 00:27:30.430 "ddgst": false 00:27:30.430 }, 00:27:30.430 "method": "bdev_nvme_attach_controller" 00:27:30.430 },{ 00:27:30.430 "params": { 00:27:30.430 "name": "Nvme2", 00:27:30.430 "trtype": "tcp", 00:27:30.430 "traddr": "10.0.0.2", 00:27:30.430 "adrfam": "ipv4", 00:27:30.430 "trsvcid": "4420", 00:27:30.430 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:30.430 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:30.430 "hdgst": false, 00:27:30.430 "ddgst": false 00:27:30.430 }, 00:27:30.430 "method": "bdev_nvme_attach_controller" 00:27:30.430 },{ 00:27:30.430 "params": { 00:27:30.430 "name": "Nvme3", 00:27:30.430 "trtype": "tcp", 00:27:30.430 "traddr": "10.0.0.2", 00:27:30.430 "adrfam": "ipv4", 00:27:30.430 "trsvcid": "4420", 00:27:30.430 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:30.430 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:30.430 "hdgst": false, 00:27:30.430 "ddgst": false 00:27:30.430 }, 00:27:30.430 "method": "bdev_nvme_attach_controller" 00:27:30.430 },{ 00:27:30.430 "params": { 00:27:30.430 "name": "Nvme4", 00:27:30.430 "trtype": "tcp", 00:27:30.430 "traddr": "10.0.0.2", 00:27:30.430 "adrfam": "ipv4", 00:27:30.430 "trsvcid": "4420", 00:27:30.430 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:30.430 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:30.430 "hdgst": false, 00:27:30.430 "ddgst": false 00:27:30.430 }, 00:27:30.430 "method": "bdev_nvme_attach_controller" 00:27:30.430 },{ 00:27:30.430 "params": { 00:27:30.430 "name": "Nvme5", 00:27:30.430 "trtype": "tcp", 00:27:30.430 "traddr": "10.0.0.2", 00:27:30.430 "adrfam": "ipv4", 00:27:30.430 "trsvcid": "4420", 00:27:30.430 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:30.430 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:30.430 "hdgst": false, 00:27:30.430 "ddgst": false 00:27:30.430 }, 00:27:30.430 "method": "bdev_nvme_attach_controller" 00:27:30.430 },{ 00:27:30.430 "params": { 00:27:30.430 "name": "Nvme6", 00:27:30.430 "trtype": "tcp", 00:27:30.430 "traddr": "10.0.0.2", 00:27:30.430 "adrfam": "ipv4", 00:27:30.430 "trsvcid": "4420", 00:27:30.430 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:30.430 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:30.430 "hdgst": false, 00:27:30.430 "ddgst": false 00:27:30.430 }, 00:27:30.430 "method": "bdev_nvme_attach_controller" 00:27:30.430 },{ 00:27:30.430 "params": { 00:27:30.430 "name": "Nvme7", 00:27:30.430 "trtype": "tcp", 00:27:30.430 "traddr": "10.0.0.2", 00:27:30.430 "adrfam": "ipv4", 00:27:30.430 "trsvcid": "4420", 00:27:30.430 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:30.430 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:30.430 "hdgst": false, 00:27:30.430 "ddgst": false 00:27:30.430 }, 00:27:30.430 "method": "bdev_nvme_attach_controller" 00:27:30.430 },{ 00:27:30.430 "params": { 00:27:30.430 "name": "Nvme8", 00:27:30.430 "trtype": "tcp", 00:27:30.430 "traddr": "10.0.0.2", 00:27:30.430 "adrfam": "ipv4", 00:27:30.430 "trsvcid": "4420", 00:27:30.430 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:30.430 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:30.430 "hdgst": false, 
00:27:30.430 "ddgst": false 00:27:30.430 }, 00:27:30.430 "method": "bdev_nvme_attach_controller" 00:27:30.430 },{ 00:27:30.430 "params": { 00:27:30.430 "name": "Nvme9", 00:27:30.430 "trtype": "tcp", 00:27:30.430 "traddr": "10.0.0.2", 00:27:30.430 "adrfam": "ipv4", 00:27:30.430 "trsvcid": "4420", 00:27:30.430 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:30.430 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:30.430 "hdgst": false, 00:27:30.430 "ddgst": false 00:27:30.430 }, 00:27:30.430 "method": "bdev_nvme_attach_controller" 00:27:30.430 },{ 00:27:30.430 "params": { 00:27:30.430 "name": "Nvme10", 00:27:30.430 "trtype": "tcp", 00:27:30.430 "traddr": "10.0.0.2", 00:27:30.430 "adrfam": "ipv4", 00:27:30.430 "trsvcid": "4420", 00:27:30.430 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:30.430 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:30.430 "hdgst": false, 00:27:30.430 "ddgst": false 00:27:30.430 }, 00:27:30.430 "method": "bdev_nvme_attach_controller" 00:27:30.430 }' 00:27:30.430 [2024-07-14 04:43:50.379143] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:30.430 [2024-07-14 04:43:50.379231] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:30.430 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.430 [2024-07-14 04:43:50.443102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.430 [2024-07-14 04:43:50.529411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.326 04:43:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:32.326 04:43:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:32.326 04:43:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:32.326 04:43:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.326 04:43:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.326 04:43:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.326 04:43:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2874981 00:27:32.326 04:43:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:32.326 04:43:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:33.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2874981 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:33.260 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2874809 00:27:33.260 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:33.260 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:33.260 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:33.260 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@532 -- # local subsystem config 00:27:33.260 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.260 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.260 { 00:27:33.260 "params": { 00:27:33.260 "name": "Nvme$subsystem", 00:27:33.260 "trtype": "$TEST_TRANSPORT", 00:27:33.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.260 "adrfam": "ipv4", 00:27:33.260 "trsvcid": "$NVMF_PORT", 00:27:33.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.260 "hdgst": ${hdgst:-false}, 00:27:33.260 "ddgst": ${ddgst:-false} 00:27:33.260 }, 00:27:33.260 "method": "bdev_nvme_attach_controller" 00:27:33.260 } 00:27:33.260 EOF 00:27:33.260 )") 00:27:33.260 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:33.260 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.260 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.260 { 00:27:33.260 "params": { 00:27:33.260 "name": "Nvme$subsystem", 00:27:33.260 "trtype": "$TEST_TRANSPORT", 00:27:33.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.260 "adrfam": "ipv4", 00:27:33.260 "trsvcid": "$NVMF_PORT", 00:27:33.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.260 "hdgst": ${hdgst:-false}, 00:27:33.260 "ddgst": ${ddgst:-false} 00:27:33.260 }, 00:27:33.260 "method": "bdev_nvme_attach_controller" 00:27:33.260 } 00:27:33.260 EOF 00:27:33.260 )") 00:27:33.260 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:33.260 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.260 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.260 { 00:27:33.260 "params": { 00:27:33.260 "name": "Nvme$subsystem", 00:27:33.260 "trtype": "$TEST_TRANSPORT", 00:27:33.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.260 "adrfam": "ipv4", 00:27:33.260 "trsvcid": "$NVMF_PORT", 00:27:33.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.260 "hdgst": ${hdgst:-false}, 00:27:33.260 "ddgst": ${ddgst:-false} 00:27:33.260 }, 00:27:33.260 "method": "bdev_nvme_attach_controller" 00:27:33.260 } 00:27:33.260 EOF 00:27:33.260 )") 00:27:33.260 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:33.260 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.260 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.260 { 00:27:33.260 "params": { 00:27:33.260 "name": "Nvme$subsystem", 00:27:33.260 "trtype": "$TEST_TRANSPORT", 00:27:33.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.260 "adrfam": "ipv4", 00:27:33.260 "trsvcid": "$NVMF_PORT", 00:27:33.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.260 "hdgst": ${hdgst:-false}, 00:27:33.260 "ddgst": ${ddgst:-false} 00:27:33.260 }, 00:27:33.260 "method": "bdev_nvme_attach_controller" 00:27:33.260 } 00:27:33.260 EOF 00:27:33.260 )") 00:27:33.260 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:27:33.261 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.261 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.261 { 00:27:33.261 "params": { 00:27:33.261 "name": "Nvme$subsystem", 00:27:33.261 "trtype": "$TEST_TRANSPORT", 00:27:33.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.261 "adrfam": "ipv4", 00:27:33.261 "trsvcid": "$NVMF_PORT", 00:27:33.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.261 "hdgst": ${hdgst:-false}, 00:27:33.261 "ddgst": ${ddgst:-false} 00:27:33.261 }, 00:27:33.261 "method": "bdev_nvme_attach_controller" 00:27:33.261 } 00:27:33.261 EOF 00:27:33.261 )") 00:27:33.261 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:33.261 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.261 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.261 { 00:27:33.261 "params": { 00:27:33.261 "name": "Nvme$subsystem", 00:27:33.261 "trtype": "$TEST_TRANSPORT", 00:27:33.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.261 "adrfam": "ipv4", 00:27:33.261 "trsvcid": "$NVMF_PORT", 00:27:33.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.261 "hdgst": ${hdgst:-false}, 00:27:33.261 "ddgst": ${ddgst:-false} 00:27:33.261 }, 00:27:33.261 "method": "bdev_nvme_attach_controller" 00:27:33.261 } 00:27:33.261 EOF 00:27:33.261 )") 00:27:33.261 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:33.261 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.261 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.261 { 00:27:33.261 "params": { 00:27:33.261 "name": "Nvme$subsystem", 00:27:33.261 "trtype": "$TEST_TRANSPORT", 00:27:33.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.261 "adrfam": "ipv4", 00:27:33.261 "trsvcid": "$NVMF_PORT", 00:27:33.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.261 "hdgst": ${hdgst:-false}, 00:27:33.261 "ddgst": ${ddgst:-false} 00:27:33.261 }, 00:27:33.261 "method": "bdev_nvme_attach_controller" 00:27:33.261 } 00:27:33.261 EOF 00:27:33.261 )") 00:27:33.261 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:33.261 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.261 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.261 { 00:27:33.261 "params": { 00:27:33.261 "name": "Nvme$subsystem", 00:27:33.261 "trtype": "$TEST_TRANSPORT", 00:27:33.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.261 "adrfam": "ipv4", 00:27:33.261 "trsvcid": "$NVMF_PORT", 00:27:33.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.261 "hdgst": ${hdgst:-false}, 00:27:33.261 "ddgst": ${ddgst:-false} 00:27:33.261 }, 00:27:33.261 "method": "bdev_nvme_attach_controller" 00:27:33.261 } 00:27:33.261 EOF 00:27:33.261 )") 00:27:33.261 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:27:33.261 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.261 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.261 { 00:27:33.261 "params": { 00:27:33.261 "name": "Nvme$subsystem", 00:27:33.261 "trtype": "$TEST_TRANSPORT", 00:27:33.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.261 "adrfam": "ipv4", 00:27:33.261 "trsvcid": "$NVMF_PORT", 00:27:33.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.261 "hdgst": ${hdgst:-false}, 00:27:33.261 "ddgst": ${ddgst:-false} 00:27:33.261 }, 00:27:33.261 "method": "bdev_nvme_attach_controller" 00:27:33.261 } 00:27:33.261 EOF 00:27:33.261 )") 00:27:33.261 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:33.261 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.261 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.261 { 00:27:33.261 "params": { 00:27:33.261 "name": "Nvme$subsystem", 00:27:33.261 "trtype": "$TEST_TRANSPORT", 00:27:33.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.261 "adrfam": "ipv4", 00:27:33.261 "trsvcid": "$NVMF_PORT", 00:27:33.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.261 "hdgst": ${hdgst:-false}, 00:27:33.261 "ddgst": ${ddgst:-false} 00:27:33.261 }, 00:27:33.261 "method": "bdev_nvme_attach_controller" 00:27:33.261 } 00:27:33.261 EOF 00:27:33.261 )") 00:27:33.261 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:33.261 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
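In both runs above the generated config never touches disk: the job-control message for the first app shows the script passing --json <(gen_nvmf_target_json ...), which bash exposes to the child process as /dev/fd/63 (or /dev/fd/62) in the traced command line. An illustration of the same pattern, with a relative path to bdevperf standing in for the full jenkins workspace path and the tc1 flags shown above:

# Illustration only: feed the generated target config to bdevperf via process
# substitution, as the traced tc1 invocation does.
./build/examples/bdevperf \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1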
00:27:33.261 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:33.261 04:43:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:33.261 "params": { 00:27:33.261 "name": "Nvme1", 00:27:33.261 "trtype": "tcp", 00:27:33.261 "traddr": "10.0.0.2", 00:27:33.261 "adrfam": "ipv4", 00:27:33.261 "trsvcid": "4420", 00:27:33.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:33.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:33.261 "hdgst": false, 00:27:33.261 "ddgst": false 00:27:33.261 }, 00:27:33.261 "method": "bdev_nvme_attach_controller" 00:27:33.261 },{ 00:27:33.261 "params": { 00:27:33.261 "name": "Nvme2", 00:27:33.261 "trtype": "tcp", 00:27:33.261 "traddr": "10.0.0.2", 00:27:33.261 "adrfam": "ipv4", 00:27:33.261 "trsvcid": "4420", 00:27:33.261 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:33.261 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:33.261 "hdgst": false, 00:27:33.261 "ddgst": false 00:27:33.261 }, 00:27:33.261 "method": "bdev_nvme_attach_controller" 00:27:33.261 },{ 00:27:33.261 "params": { 00:27:33.261 "name": "Nvme3", 00:27:33.261 "trtype": "tcp", 00:27:33.261 "traddr": "10.0.0.2", 00:27:33.261 "adrfam": "ipv4", 00:27:33.261 "trsvcid": "4420", 00:27:33.261 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:33.261 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:33.261 "hdgst": false, 00:27:33.261 "ddgst": false 00:27:33.261 }, 00:27:33.261 "method": "bdev_nvme_attach_controller" 00:27:33.261 },{ 00:27:33.261 "params": { 00:27:33.261 "name": "Nvme4", 00:27:33.261 "trtype": "tcp", 00:27:33.261 "traddr": "10.0.0.2", 00:27:33.261 "adrfam": "ipv4", 00:27:33.261 "trsvcid": "4420", 00:27:33.261 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:33.261 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:33.261 "hdgst": false, 00:27:33.261 "ddgst": false 00:27:33.261 }, 00:27:33.261 "method": "bdev_nvme_attach_controller" 00:27:33.261 },{ 00:27:33.261 "params": { 00:27:33.261 "name": "Nvme5", 00:27:33.261 "trtype": "tcp", 00:27:33.261 "traddr": "10.0.0.2", 00:27:33.261 "adrfam": "ipv4", 00:27:33.261 "trsvcid": "4420", 00:27:33.261 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:33.261 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:33.261 "hdgst": false, 00:27:33.261 "ddgst": false 00:27:33.261 }, 00:27:33.261 "method": "bdev_nvme_attach_controller" 00:27:33.261 },{ 00:27:33.261 "params": { 00:27:33.261 "name": "Nvme6", 00:27:33.261 "trtype": "tcp", 00:27:33.261 "traddr": "10.0.0.2", 00:27:33.261 "adrfam": "ipv4", 00:27:33.261 "trsvcid": "4420", 00:27:33.261 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:33.261 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:33.261 "hdgst": false, 00:27:33.261 "ddgst": false 00:27:33.261 }, 00:27:33.261 "method": "bdev_nvme_attach_controller" 00:27:33.261 },{ 00:27:33.261 "params": { 00:27:33.261 "name": "Nvme7", 00:27:33.261 "trtype": "tcp", 00:27:33.261 "traddr": "10.0.0.2", 00:27:33.261 "adrfam": "ipv4", 00:27:33.261 "trsvcid": "4420", 00:27:33.261 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:33.261 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:33.261 "hdgst": false, 00:27:33.261 "ddgst": false 00:27:33.261 }, 00:27:33.261 "method": "bdev_nvme_attach_controller" 00:27:33.261 },{ 00:27:33.261 "params": { 00:27:33.261 "name": "Nvme8", 00:27:33.261 "trtype": "tcp", 00:27:33.261 "traddr": "10.0.0.2", 00:27:33.261 "adrfam": "ipv4", 00:27:33.261 "trsvcid": "4420", 00:27:33.261 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:33.261 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:33.261 "hdgst": false, 
00:27:33.261 "ddgst": false 00:27:33.261 }, 00:27:33.261 "method": "bdev_nvme_attach_controller" 00:27:33.261 },{ 00:27:33.261 "params": { 00:27:33.261 "name": "Nvme9", 00:27:33.261 "trtype": "tcp", 00:27:33.261 "traddr": "10.0.0.2", 00:27:33.261 "adrfam": "ipv4", 00:27:33.261 "trsvcid": "4420", 00:27:33.261 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:33.261 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:33.261 "hdgst": false, 00:27:33.261 "ddgst": false 00:27:33.261 }, 00:27:33.261 "method": "bdev_nvme_attach_controller" 00:27:33.261 },{ 00:27:33.261 "params": { 00:27:33.261 "name": "Nvme10", 00:27:33.261 "trtype": "tcp", 00:27:33.261 "traddr": "10.0.0.2", 00:27:33.261 "adrfam": "ipv4", 00:27:33.261 "trsvcid": "4420", 00:27:33.261 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:33.261 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:33.261 "hdgst": false, 00:27:33.261 "ddgst": false 00:27:33.261 }, 00:27:33.261 "method": "bdev_nvme_attach_controller" 00:27:33.261 }' 00:27:33.262 [2024-07-14 04:43:53.428066] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:33.262 [2024-07-14 04:43:53.428145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2875397 ] 00:27:33.519 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.519 [2024-07-14 04:43:53.492589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.519 [2024-07-14 04:43:53.579195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.416 Running I/O for 1 seconds... 00:27:36.788 00:27:36.788 Latency(us) 00:27:36.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.788 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:36.788 Verification LBA range: start 0x0 length 0x400 00:27:36.788 Nvme1n1 : 1.06 180.93 11.31 0.00 0.00 349580.45 23204.60 292047.83 00:27:36.788 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:36.788 Verification LBA range: start 0x0 length 0x400 00:27:36.788 Nvme2n1 : 1.22 210.60 13.16 0.00 0.00 296107.80 29515.47 267192.70 00:27:36.788 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:36.788 Verification LBA range: start 0x0 length 0x400 00:27:36.788 Nvme3n1 : 1.18 271.68 16.98 0.00 0.00 225602.37 19709.35 240784.12 00:27:36.788 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:36.788 Verification LBA range: start 0x0 length 0x400 00:27:36.788 Nvme4n1 : 1.19 215.25 13.45 0.00 0.00 280443.64 22136.60 265639.25 00:27:36.788 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:36.788 Verification LBA range: start 0x0 length 0x400 00:27:36.788 Nvme5n1 : 1.22 262.18 16.39 0.00 0.00 227009.88 20971.52 267192.70 00:27:36.788 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:36.788 Verification LBA range: start 0x0 length 0x400 00:27:36.788 Nvme6n1 : 1.24 206.81 12.93 0.00 0.00 283563.05 23495.87 298261.62 00:27:36.788 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:36.788 Verification LBA range: start 0x0 length 0x400 00:27:36.788 Nvme7n1 : 1.21 265.26 16.58 0.00 0.00 216882.29 18155.90 270299.59 00:27:36.788 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:36.788 Verification LBA range: start 0x0 length 0x400 
00:27:36.788 Nvme8n1 : 1.20 214.01 13.38 0.00 0.00 263886.32 42719.76 220589.32 00:27:36.788 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:36.788 Verification LBA range: start 0x0 length 0x400 00:27:36.788 Nvme9n1 : 1.24 206.21 12.89 0.00 0.00 270940.16 24855.13 310689.19 00:27:36.788 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:36.788 Verification LBA range: start 0x0 length 0x400 00:27:36.788 Nvme10n1 : 1.21 210.75 13.17 0.00 0.00 259546.26 24563.86 287387.50 00:27:36.788 =================================================================================================================== 00:27:36.788 Total : 2243.68 140.23 0.00 0.00 262241.98 18155.90 310689.19 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:36.788 rmmod nvme_tcp 00:27:36.788 rmmod nvme_fabrics 00:27:36.788 rmmod nvme_keyring 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2874809 ']' 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2874809 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 2874809 ']' 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 2874809 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2874809 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:36.788 
04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2874809' 00:27:36.788 killing process with pid 2874809 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 2874809 00:27:36.788 04:43:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 2874809 00:27:37.353 04:43:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:37.353 04:43:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:37.353 04:43:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:37.353 04:43:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:37.353 04:43:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:37.353 04:43:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.353 04:43:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:37.353 04:43:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:39.256 00:27:39.256 real 0m12.030s 00:27:39.256 user 0m35.205s 00:27:39.256 sys 0m3.350s 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:39.256 ************************************ 00:27:39.256 END TEST nvmf_shutdown_tc1 00:27:39.256 ************************************ 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:39.256 ************************************ 00:27:39.256 START TEST nvmf_shutdown_tc2 00:27:39.256 ************************************ 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:39.256 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:39.257 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:39.257 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:39.257 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:39.257 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:39.257 04:43:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:39.257 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:39.515 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:39.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:39.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:27:39.516 00:27:39.516 --- 10.0.0.2 ping statistics --- 00:27:39.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.516 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:39.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:39.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:27:39.516 00:27:39.516 --- 10.0.0.1 ping statistics --- 00:27:39.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.516 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2876176 
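The nvmf_tcp_init steps above split the NIC pair so the initiator keeps cvl_0_1 in the default namespace while the target port moves into a private namespace, letting the whole NVMe/TCP fabric run back-to-back on one host. A condensed sketch of that topology, with hypothetical interface names eth_tgt/eth_ini standing in for cvl_0_0/cvl_0_1:

# Sketch of the loopback fabric built above (hypothetical names eth_tgt/eth_ini).
ip netns add tgt_ns                                        # private namespace for the target
ip link set eth_tgt netns tgt_ns                           # target port moves into it
ip addr add 10.0.0.1/24 dev eth_ini                        # initiator side address
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt   # target side address
ip link set eth_ini up
ip netns exec tgt_ns ip link set eth_tgt up
ip netns exec tgt_ns ip link set lo up
iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec tgt_ns ping -c 1 10.0.0.1                    # target -> initiator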
00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2876176 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 2876176 ']' 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:39.516 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:39.516 [2024-07-14 04:43:59.657403] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:39.516 [2024-07-14 04:43:59.657483] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:39.516 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.774 [2024-07-14 04:43:59.734792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:39.774 [2024-07-14 04:43:59.826963] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:39.774 [2024-07-14 04:43:59.827019] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:39.774 [2024-07-14 04:43:59.827048] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:39.774 [2024-07-14 04:43:59.827059] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:39.774 [2024-07-14 04:43:59.827075] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:39.774 [2024-07-14 04:43:59.827128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:39.774 [2024-07-14 04:43:59.827190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:39.774 [2024-07-14 04:43:59.827256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:39.774 [2024-07-14 04:43:59.827260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.774 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:39.774 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:39.774 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:39.774 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:39.774 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:40.032 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.032 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:40.032 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.032 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:40.032 [2024-07-14 04:43:59.990756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.032 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.032 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:40.032 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:40.032 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:40.032 04:43:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:40.032 04:44:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.032 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:40.032 Malloc1 00:27:40.032 [2024-07-14 04:44:00.083941] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:40.032 Malloc2 00:27:40.032 Malloc3 00:27:40.032 Malloc4 00:27:40.290 Malloc5 00:27:40.290 Malloc6 00:27:40.290 Malloc7 00:27:40.290 Malloc8 00:27:40.290 Malloc9 00:27:40.549 Malloc10 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2876358 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2876358 /var/tmp/bdevperf.sock 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 2876358 ']' 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:40.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
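The shutdown.sh@28 cat loop above batches one block of RPCs per subsystem into rpcs.txt, and the single rpc_cmd call then creates Malloc1 through Malloc10 and the TCP listener on 10.0.0.2:4420. The exact RPC text is not echoed in this trace; a representative batch for one subsystem, using standard SPDK rpc.py commands with the bdev size and serial number chosen only for illustration, might look like:

# Illustrative RPC batch for subsystem 1; sizes and serial are assumptions, not from the trace.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420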
00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.549 { 00:27:40.549 "params": { 00:27:40.549 "name": "Nvme$subsystem", 00:27:40.549 "trtype": "$TEST_TRANSPORT", 00:27:40.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.549 "adrfam": "ipv4", 00:27:40.549 "trsvcid": "$NVMF_PORT", 00:27:40.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.549 "hdgst": ${hdgst:-false}, 00:27:40.549 "ddgst": ${ddgst:-false} 00:27:40.549 }, 00:27:40.549 "method": "bdev_nvme_attach_controller" 00:27:40.549 } 00:27:40.549 EOF 00:27:40.549 )") 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.549 { 00:27:40.549 "params": { 00:27:40.549 "name": "Nvme$subsystem", 00:27:40.549 "trtype": "$TEST_TRANSPORT", 00:27:40.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.549 "adrfam": "ipv4", 00:27:40.549 "trsvcid": "$NVMF_PORT", 00:27:40.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.549 "hdgst": ${hdgst:-false}, 00:27:40.549 "ddgst": ${ddgst:-false} 00:27:40.549 }, 00:27:40.549 "method": "bdev_nvme_attach_controller" 00:27:40.549 } 00:27:40.549 EOF 00:27:40.549 )") 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.549 { 00:27:40.549 "params": { 00:27:40.549 "name": "Nvme$subsystem", 00:27:40.549 "trtype": "$TEST_TRANSPORT", 00:27:40.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.549 "adrfam": "ipv4", 00:27:40.549 "trsvcid": "$NVMF_PORT", 00:27:40.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.549 "hdgst": ${hdgst:-false}, 00:27:40.549 "ddgst": ${ddgst:-false} 00:27:40.549 }, 00:27:40.549 "method": "bdev_nvme_attach_controller" 00:27:40.549 } 00:27:40.549 EOF 00:27:40.549 )") 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.549 { 00:27:40.549 "params": { 00:27:40.549 "name": "Nvme$subsystem", 00:27:40.549 "trtype": "$TEST_TRANSPORT", 00:27:40.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.549 "adrfam": "ipv4", 00:27:40.549 "trsvcid": "$NVMF_PORT", 00:27:40.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.549 "hdgst": ${hdgst:-false}, 00:27:40.549 "ddgst": ${ddgst:-false} 00:27:40.549 }, 00:27:40.549 "method": "bdev_nvme_attach_controller" 00:27:40.549 } 00:27:40.549 EOF 00:27:40.549 )") 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.549 { 00:27:40.549 "params": { 00:27:40.549 "name": "Nvme$subsystem", 00:27:40.549 "trtype": "$TEST_TRANSPORT", 00:27:40.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.549 "adrfam": "ipv4", 00:27:40.549 "trsvcid": "$NVMF_PORT", 00:27:40.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.549 "hdgst": ${hdgst:-false}, 00:27:40.549 "ddgst": ${ddgst:-false} 00:27:40.549 }, 00:27:40.549 "method": "bdev_nvme_attach_controller" 00:27:40.549 } 00:27:40.549 EOF 00:27:40.549 )") 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.549 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.549 { 00:27:40.549 "params": { 00:27:40.549 "name": "Nvme$subsystem", 00:27:40.549 "trtype": "$TEST_TRANSPORT", 00:27:40.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.550 "adrfam": "ipv4", 00:27:40.550 "trsvcid": "$NVMF_PORT", 00:27:40.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.550 "hdgst": ${hdgst:-false}, 00:27:40.550 "ddgst": ${ddgst:-false} 00:27:40.550 }, 00:27:40.550 "method": "bdev_nvme_attach_controller" 00:27:40.550 } 00:27:40.550 EOF 00:27:40.550 )") 00:27:40.550 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:40.550 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.550 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.550 { 00:27:40.550 "params": { 00:27:40.550 "name": "Nvme$subsystem", 00:27:40.550 "trtype": "$TEST_TRANSPORT", 00:27:40.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.550 "adrfam": "ipv4", 00:27:40.550 "trsvcid": "$NVMF_PORT", 00:27:40.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.550 "hdgst": ${hdgst:-false}, 00:27:40.550 "ddgst": ${ddgst:-false} 00:27:40.550 }, 00:27:40.550 "method": "bdev_nvme_attach_controller" 00:27:40.550 } 00:27:40.550 EOF 00:27:40.550 )") 00:27:40.550 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:40.550 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:27:40.550 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.550 { 00:27:40.550 "params": { 00:27:40.550 "name": "Nvme$subsystem", 00:27:40.550 "trtype": "$TEST_TRANSPORT", 00:27:40.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.550 "adrfam": "ipv4", 00:27:40.550 "trsvcid": "$NVMF_PORT", 00:27:40.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.550 "hdgst": ${hdgst:-false}, 00:27:40.550 "ddgst": ${ddgst:-false} 00:27:40.550 }, 00:27:40.550 "method": "bdev_nvme_attach_controller" 00:27:40.550 } 00:27:40.550 EOF 00:27:40.550 )") 00:27:40.550 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:40.550 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.550 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.550 { 00:27:40.550 "params": { 00:27:40.550 "name": "Nvme$subsystem", 00:27:40.550 "trtype": "$TEST_TRANSPORT", 00:27:40.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.550 "adrfam": "ipv4", 00:27:40.550 "trsvcid": "$NVMF_PORT", 00:27:40.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.550 "hdgst": ${hdgst:-false}, 00:27:40.550 "ddgst": ${ddgst:-false} 00:27:40.550 }, 00:27:40.550 "method": "bdev_nvme_attach_controller" 00:27:40.550 } 00:27:40.550 EOF 00:27:40.550 )") 00:27:40.550 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:40.550 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.550 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.550 { 00:27:40.550 "params": { 00:27:40.550 "name": "Nvme$subsystem", 00:27:40.550 "trtype": "$TEST_TRANSPORT", 00:27:40.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.550 "adrfam": "ipv4", 00:27:40.550 "trsvcid": "$NVMF_PORT", 00:27:40.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.550 "hdgst": ${hdgst:-false}, 00:27:40.550 "ddgst": ${ddgst:-false} 00:27:40.550 }, 00:27:40.550 "method": "bdev_nvme_attach_controller" 00:27:40.550 } 00:27:40.550 EOF 00:27:40.550 )") 00:27:40.550 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:40.550 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:27:40.550 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:40.550 04:44:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:40.550 "params": { 00:27:40.550 "name": "Nvme1", 00:27:40.550 "trtype": "tcp", 00:27:40.550 "traddr": "10.0.0.2", 00:27:40.550 "adrfam": "ipv4", 00:27:40.550 "trsvcid": "4420", 00:27:40.550 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:40.550 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:40.550 "hdgst": false, 00:27:40.550 "ddgst": false 00:27:40.550 }, 00:27:40.550 "method": "bdev_nvme_attach_controller" 00:27:40.550 },{ 00:27:40.550 "params": { 00:27:40.550 "name": "Nvme2", 00:27:40.550 "trtype": "tcp", 00:27:40.550 "traddr": "10.0.0.2", 00:27:40.550 "adrfam": "ipv4", 00:27:40.550 "trsvcid": "4420", 00:27:40.550 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:40.550 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:40.550 "hdgst": false, 00:27:40.550 "ddgst": false 00:27:40.550 }, 00:27:40.550 "method": "bdev_nvme_attach_controller" 00:27:40.550 },{ 00:27:40.550 "params": { 00:27:40.550 "name": "Nvme3", 00:27:40.550 "trtype": "tcp", 00:27:40.550 "traddr": "10.0.0.2", 00:27:40.550 "adrfam": "ipv4", 00:27:40.550 "trsvcid": "4420", 00:27:40.550 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:40.550 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:40.550 "hdgst": false, 00:27:40.550 "ddgst": false 00:27:40.550 }, 00:27:40.550 "method": "bdev_nvme_attach_controller" 00:27:40.550 },{ 00:27:40.550 "params": { 00:27:40.550 "name": "Nvme4", 00:27:40.550 "trtype": "tcp", 00:27:40.550 "traddr": "10.0.0.2", 00:27:40.550 "adrfam": "ipv4", 00:27:40.550 "trsvcid": "4420", 00:27:40.550 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:40.550 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:40.550 "hdgst": false, 00:27:40.550 "ddgst": false 00:27:40.550 }, 00:27:40.550 "method": "bdev_nvme_attach_controller" 00:27:40.550 },{ 00:27:40.550 "params": { 00:27:40.550 "name": "Nvme5", 00:27:40.550 "trtype": "tcp", 00:27:40.550 "traddr": "10.0.0.2", 00:27:40.550 "adrfam": "ipv4", 00:27:40.550 "trsvcid": "4420", 00:27:40.550 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:40.550 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:40.550 "hdgst": false, 00:27:40.550 "ddgst": false 00:27:40.550 }, 00:27:40.550 "method": "bdev_nvme_attach_controller" 00:27:40.550 },{ 00:27:40.550 "params": { 00:27:40.550 "name": "Nvme6", 00:27:40.550 "trtype": "tcp", 00:27:40.550 "traddr": "10.0.0.2", 00:27:40.550 "adrfam": "ipv4", 00:27:40.550 "trsvcid": "4420", 00:27:40.550 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:40.550 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:40.550 "hdgst": false, 00:27:40.550 "ddgst": false 00:27:40.550 }, 00:27:40.550 "method": "bdev_nvme_attach_controller" 00:27:40.550 },{ 00:27:40.550 "params": { 00:27:40.550 "name": "Nvme7", 00:27:40.550 "trtype": "tcp", 00:27:40.550 "traddr": "10.0.0.2", 00:27:40.550 "adrfam": "ipv4", 00:27:40.550 "trsvcid": "4420", 00:27:40.550 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:40.550 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:40.550 "hdgst": false, 00:27:40.550 "ddgst": false 00:27:40.550 }, 00:27:40.550 "method": "bdev_nvme_attach_controller" 00:27:40.550 },{ 00:27:40.550 "params": { 00:27:40.550 "name": "Nvme8", 00:27:40.550 "trtype": "tcp", 00:27:40.550 "traddr": "10.0.0.2", 00:27:40.550 "adrfam": "ipv4", 00:27:40.550 "trsvcid": "4420", 00:27:40.550 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:40.550 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:40.550 "hdgst": false, 
00:27:40.550 "ddgst": false 00:27:40.550 }, 00:27:40.550 "method": "bdev_nvme_attach_controller" 00:27:40.550 },{ 00:27:40.550 "params": { 00:27:40.550 "name": "Nvme9", 00:27:40.550 "trtype": "tcp", 00:27:40.550 "traddr": "10.0.0.2", 00:27:40.550 "adrfam": "ipv4", 00:27:40.550 "trsvcid": "4420", 00:27:40.550 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:40.550 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:40.550 "hdgst": false, 00:27:40.550 "ddgst": false 00:27:40.550 }, 00:27:40.550 "method": "bdev_nvme_attach_controller" 00:27:40.550 },{ 00:27:40.550 "params": { 00:27:40.550 "name": "Nvme10", 00:27:40.550 "trtype": "tcp", 00:27:40.550 "traddr": "10.0.0.2", 00:27:40.550 "adrfam": "ipv4", 00:27:40.550 "trsvcid": "4420", 00:27:40.550 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:40.550 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:40.550 "hdgst": false, 00:27:40.550 "ddgst": false 00:27:40.550 }, 00:27:40.550 "method": "bdev_nvme_attach_controller" 00:27:40.550 }' 00:27:40.550 [2024-07-14 04:44:00.589512] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:40.550 [2024-07-14 04:44:00.589590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2876358 ] 00:27:40.550 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.550 [2024-07-14 04:44:00.655183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.808 [2024-07-14 04:44:00.744372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.213 Running I/O for 10 seconds... 00:27:42.471 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:42.471 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:42.472 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:42.472 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.472 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.472 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.472 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:42.472 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:42.472 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:42.472 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:42.472 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:42.472 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:42.472 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:42.472 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:42.472 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:42.472 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.472 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.472 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.472 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:42.472 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:42.472 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:42.729 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:42.730 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:42.730 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:42.730 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:42.730 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.730 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.730 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.730 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:42.730 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:42.730 04:44:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:42.988 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:42.988 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:42.988 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:42.988 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:42.988 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.988 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.247 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.247 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:43.247 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:43.247 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:43.247 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:43.247 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:43.247 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2876358 00:27:43.247 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 2876358 ']' 00:27:43.247 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 2876358 00:27:43.247 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 
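The shutdown.sh@57-@69 trace above is a single run of the test's waitforio helper, which polls bdevperf over its RPC socket until the first bdev reports enough completed reads to prove I/O is flowing. A minimal sketch of that loop, reconstructed from the traced commands (the function wrapper and argument names are assumptions; only the loop body is taken from the trace):

waitforio() {
    # rpc_addr is the bdevperf RPC socket (/var/tmp/bdevperf.sock in this run),
    # bdev is the first attached bdev (Nvme1n1 in this run).
    local rpc_addr=$1 bdev=$2
    local ret=1
    local i
    for ((i = 10; i != 0; i--)); do
        # bdev_get_iostat returns per-bdev counters; num_read_ops is the
        # cumulative number of completed read operations on that bdev.
        read_io_count=$(rpc_cmd -s "$rpc_addr" bdev_get_iostat -b "$bdev" \
                        | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

In this run the counter climbs 3 -> 67 -> 131 across three polls, the "131 -ge 100" test passes, and the helper returns 0 before the trace moves on to killing the bdevperf process.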
00:27:43.247 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:43.247 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2876358 00:27:43.247 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:43.247 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:43.247 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2876358' 00:27:43.247 killing process with pid 2876358 00:27:43.247 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 2876358 00:27:43.247 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 2876358 00:27:43.247 Received shutdown signal, test time was about 1.057289 seconds 00:27:43.247 00:27:43.247 Latency(us) 00:27:43.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.247 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:43.247 Verification LBA range: start 0x0 length 0x400 00:27:43.247 Nvme1n1 : 1.04 184.68 11.54 0.00 0.00 341640.03 22039.51 371273.58 00:27:43.247 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:43.247 Verification LBA range: start 0x0 length 0x400 00:27:43.247 Nvme2n1 : 1.04 250.06 15.63 0.00 0.00 245014.90 7233.23 242337.56 00:27:43.247 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:43.247 Verification LBA range: start 0x0 length 0x400 00:27:43.247 Nvme3n1 : 1.01 189.74 11.86 0.00 0.00 321288.72 25243.50 292047.83 00:27:43.247 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:43.247 Verification LBA range: start 0x0 length 0x400 00:27:43.247 Nvme4n1 : 1.02 188.05 11.75 0.00 0.00 318495.35 22816.24 313796.08 00:27:43.247 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:43.247 Verification LBA range: start 0x0 length 0x400 00:27:43.247 Nvme5n1 : 1.03 186.41 11.65 0.00 0.00 315382.39 25243.50 282727.16 00:27:43.247 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:43.247 Verification LBA range: start 0x0 length 0x400 00:27:43.247 Nvme6n1 : 1.06 239.48 14.97 0.00 0.00 240650.76 16893.72 265639.25 00:27:43.247 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:43.247 Verification LBA range: start 0x0 length 0x400 00:27:43.247 Nvme7n1 : 1.00 257.18 16.07 0.00 0.00 218486.33 19223.89 245444.46 00:27:43.247 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:43.247 Verification LBA range: start 0x0 length 0x400 00:27:43.247 Nvme8n1 : 1.03 317.94 19.87 0.00 0.00 174159.82 2973.39 208161.75 00:27:43.247 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:43.247 Verification LBA range: start 0x0 length 0x400 00:27:43.247 Nvme9n1 : 1.02 251.50 15.72 0.00 0.00 215377.16 21651.15 250104.79 00:27:43.247 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:43.247 Verification LBA range: start 0x0 length 0x400 00:27:43.247 Nvme10n1 : 1.05 243.23 15.20 0.00 0.00 219757.04 22719.15 284280.60 00:27:43.247 =================================================================================================================== 00:27:43.247 Total : 2308.27 144.27 
0.00 0.00 251553.52 2973.39 371273.58 00:27:43.506 04:44:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2876176 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:44.439 rmmod nvme_tcp 00:27:44.439 rmmod nvme_fabrics 00:27:44.439 rmmod nvme_keyring 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2876176 ']' 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2876176 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 2876176 ']' 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 2876176 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:44.439 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2876176 00:27:44.697 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:44.697 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:44.697 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2876176' 00:27:44.697 killing process with pid 2876176 00:27:44.697 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 2876176 00:27:44.697 04:44:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 2876176 00:27:44.956 04:44:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' 
'' == iso ']' 00:27:44.956 04:44:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:44.956 04:44:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:44.956 04:44:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:44.956 04:44:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:44.956 04:44:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.956 04:44:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.956 04:44:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.487 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:47.487 00:27:47.487 real 0m7.742s 00:27:47.487 user 0m23.303s 00:27:47.487 sys 0m1.599s 00:27:47.487 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:47.487 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.488 ************************************ 00:27:47.488 END TEST nvmf_shutdown_tc2 00:27:47.488 ************************************ 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:47.488 ************************************ 00:27:47.488 START TEST nvmf_shutdown_tc3 00:27:47.488 ************************************ 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 
00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:47.488 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:47.488 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:47.488 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.488 
04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:47.488 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:47.488 04:44:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:47.488 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:47.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:27:47.489 00:27:47.489 --- 10.0.0.2 ping statistics --- 00:27:47.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.489 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:47.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:47.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:27:47.489 00:27:47.489 --- 10.0.0.1 ping statistics --- 00:27:47.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.489 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2877265 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2877265 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 2877265 ']' 00:27:47.489 04:44:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:47.489 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:47.489 [2024-07-14 04:44:07.442003] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:47.489 [2024-07-14 04:44:07.442078] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.489 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.489 [2024-07-14 04:44:07.511772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:47.489 [2024-07-14 04:44:07.604614] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.489 [2024-07-14 04:44:07.604671] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.489 [2024-07-14 04:44:07.604688] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.489 [2024-07-14 04:44:07.604701] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.489 [2024-07-14 04:44:07.604713] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:47.489 [2024-07-14 04:44:07.604792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:47.489 [2024-07-14 04:44:07.604904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:47.489 [2024-07-14 04:44:07.604957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:47.489 [2024-07-14 04:44:07.604960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.747 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:47.747 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:47.747 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:47.747 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:47.747 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:47.747 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:47.748 [2024-07-14 04:44:07.758716] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:47.748 04:44:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.748 04:44:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:47.748 Malloc1 00:27:47.748 [2024-07-14 04:44:07.842207] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.748 Malloc2 00:27:47.748 Malloc3 00:27:48.006 Malloc4 00:27:48.006 Malloc5 00:27:48.006 Malloc6 00:27:48.006 Malloc7 00:27:48.006 Malloc8 00:27:48.265 Malloc9 00:27:48.265 Malloc10 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2877443 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2877443 /var/tmp/bdevperf.sock 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 2877443 ']' 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:48.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.265 { 00:27:48.265 "params": { 00:27:48.265 "name": "Nvme$subsystem", 00:27:48.265 "trtype": "$TEST_TRANSPORT", 00:27:48.265 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.265 "adrfam": "ipv4", 00:27:48.265 "trsvcid": "$NVMF_PORT", 00:27:48.265 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.265 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.265 "hdgst": ${hdgst:-false}, 00:27:48.265 "ddgst": ${ddgst:-false} 00:27:48.265 }, 00:27:48.265 "method": "bdev_nvme_attach_controller" 00:27:48.265 } 00:27:48.265 EOF 00:27:48.265 )") 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.265 { 00:27:48.265 "params": { 00:27:48.265 "name": "Nvme$subsystem", 00:27:48.265 "trtype": "$TEST_TRANSPORT", 00:27:48.265 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.265 "adrfam": "ipv4", 00:27:48.265 "trsvcid": "$NVMF_PORT", 00:27:48.265 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.265 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.265 "hdgst": ${hdgst:-false}, 00:27:48.265 "ddgst": ${ddgst:-false} 00:27:48.265 }, 00:27:48.265 "method": "bdev_nvme_attach_controller" 00:27:48.265 } 00:27:48.265 EOF 00:27:48.265 )") 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:48.265 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.266 { 00:27:48.266 "params": { 00:27:48.266 "name": "Nvme$subsystem", 00:27:48.266 "trtype": "$TEST_TRANSPORT", 00:27:48.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.266 "adrfam": "ipv4", 00:27:48.266 "trsvcid": "$NVMF_PORT", 00:27:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.266 "hdgst": ${hdgst:-false}, 00:27:48.266 "ddgst": ${ddgst:-false} 00:27:48.266 }, 00:27:48.266 "method": "bdev_nvme_attach_controller" 00:27:48.266 } 00:27:48.266 EOF 00:27:48.266 )") 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.266 { 00:27:48.266 "params": { 00:27:48.266 "name": "Nvme$subsystem", 00:27:48.266 "trtype": "$TEST_TRANSPORT", 00:27:48.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.266 "adrfam": "ipv4", 00:27:48.266 "trsvcid": "$NVMF_PORT", 
00:27:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.266 "hdgst": ${hdgst:-false}, 00:27:48.266 "ddgst": ${ddgst:-false} 00:27:48.266 }, 00:27:48.266 "method": "bdev_nvme_attach_controller" 00:27:48.266 } 00:27:48.266 EOF 00:27:48.266 )") 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.266 { 00:27:48.266 "params": { 00:27:48.266 "name": "Nvme$subsystem", 00:27:48.266 "trtype": "$TEST_TRANSPORT", 00:27:48.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.266 "adrfam": "ipv4", 00:27:48.266 "trsvcid": "$NVMF_PORT", 00:27:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.266 "hdgst": ${hdgst:-false}, 00:27:48.266 "ddgst": ${ddgst:-false} 00:27:48.266 }, 00:27:48.266 "method": "bdev_nvme_attach_controller" 00:27:48.266 } 00:27:48.266 EOF 00:27:48.266 )") 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.266 { 00:27:48.266 "params": { 00:27:48.266 "name": "Nvme$subsystem", 00:27:48.266 "trtype": "$TEST_TRANSPORT", 00:27:48.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.266 "adrfam": "ipv4", 00:27:48.266 "trsvcid": "$NVMF_PORT", 00:27:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.266 "hdgst": ${hdgst:-false}, 00:27:48.266 "ddgst": ${ddgst:-false} 00:27:48.266 }, 00:27:48.266 "method": "bdev_nvme_attach_controller" 00:27:48.266 } 00:27:48.266 EOF 00:27:48.266 )") 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.266 { 00:27:48.266 "params": { 00:27:48.266 "name": "Nvme$subsystem", 00:27:48.266 "trtype": "$TEST_TRANSPORT", 00:27:48.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.266 "adrfam": "ipv4", 00:27:48.266 "trsvcid": "$NVMF_PORT", 00:27:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.266 "hdgst": ${hdgst:-false}, 00:27:48.266 "ddgst": ${ddgst:-false} 00:27:48.266 }, 00:27:48.266 "method": "bdev_nvme_attach_controller" 00:27:48.266 } 00:27:48.266 EOF 00:27:48.266 )") 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.266 { 00:27:48.266 "params": { 00:27:48.266 "name": "Nvme$subsystem", 00:27:48.266 "trtype": "$TEST_TRANSPORT", 00:27:48.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.266 "adrfam": "ipv4", 00:27:48.266 "trsvcid": "$NVMF_PORT", 00:27:48.266 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.266 "hdgst": ${hdgst:-false}, 00:27:48.266 "ddgst": ${ddgst:-false} 00:27:48.266 }, 00:27:48.266 "method": "bdev_nvme_attach_controller" 00:27:48.266 } 00:27:48.266 EOF 00:27:48.266 )") 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.266 { 00:27:48.266 "params": { 00:27:48.266 "name": "Nvme$subsystem", 00:27:48.266 "trtype": "$TEST_TRANSPORT", 00:27:48.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.266 "adrfam": "ipv4", 00:27:48.266 "trsvcid": "$NVMF_PORT", 00:27:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.266 "hdgst": ${hdgst:-false}, 00:27:48.266 "ddgst": ${ddgst:-false} 00:27:48.266 }, 00:27:48.266 "method": "bdev_nvme_attach_controller" 00:27:48.266 } 00:27:48.266 EOF 00:27:48.266 )") 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.266 { 00:27:48.266 "params": { 00:27:48.266 "name": "Nvme$subsystem", 00:27:48.266 "trtype": "$TEST_TRANSPORT", 00:27:48.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.266 "adrfam": "ipv4", 00:27:48.266 "trsvcid": "$NVMF_PORT", 00:27:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.266 "hdgst": ${hdgst:-false}, 00:27:48.266 "ddgst": ${ddgst:-false} 00:27:48.266 }, 00:27:48.266 "method": "bdev_nvme_attach_controller" 00:27:48.266 } 00:27:48.266 EOF 00:27:48.266 )") 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:48.266 04:44:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:48.266 "params": { 00:27:48.266 "name": "Nvme1", 00:27:48.266 "trtype": "tcp", 00:27:48.266 "traddr": "10.0.0.2", 00:27:48.266 "adrfam": "ipv4", 00:27:48.266 "trsvcid": "4420", 00:27:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:48.266 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:48.266 "hdgst": false, 00:27:48.266 "ddgst": false 00:27:48.266 }, 00:27:48.266 "method": "bdev_nvme_attach_controller" 00:27:48.266 },{ 00:27:48.266 "params": { 00:27:48.266 "name": "Nvme2", 00:27:48.266 "trtype": "tcp", 00:27:48.266 "traddr": "10.0.0.2", 00:27:48.266 "adrfam": "ipv4", 00:27:48.266 "trsvcid": "4420", 00:27:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:48.266 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:48.266 "hdgst": false, 00:27:48.266 "ddgst": false 00:27:48.266 }, 00:27:48.266 "method": "bdev_nvme_attach_controller" 00:27:48.266 },{ 00:27:48.266 "params": { 00:27:48.266 "name": "Nvme3", 00:27:48.266 "trtype": "tcp", 00:27:48.266 "traddr": "10.0.0.2", 00:27:48.266 "adrfam": "ipv4", 00:27:48.266 "trsvcid": "4420", 00:27:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:48.266 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:48.266 "hdgst": false, 00:27:48.266 "ddgst": false 00:27:48.266 }, 00:27:48.266 "method": "bdev_nvme_attach_controller" 00:27:48.266 },{ 00:27:48.266 "params": { 00:27:48.266 "name": "Nvme4", 00:27:48.266 "trtype": "tcp", 00:27:48.266 "traddr": "10.0.0.2", 00:27:48.266 "adrfam": "ipv4", 00:27:48.266 "trsvcid": "4420", 00:27:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:48.266 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:48.266 "hdgst": false, 00:27:48.266 "ddgst": false 00:27:48.266 }, 00:27:48.266 "method": "bdev_nvme_attach_controller" 00:27:48.266 },{ 00:27:48.266 "params": { 00:27:48.266 "name": "Nvme5", 00:27:48.266 "trtype": "tcp", 00:27:48.266 "traddr": "10.0.0.2", 00:27:48.266 "adrfam": "ipv4", 00:27:48.266 "trsvcid": "4420", 00:27:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:48.266 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:48.266 "hdgst": false, 00:27:48.266 "ddgst": false 00:27:48.266 }, 00:27:48.266 "method": "bdev_nvme_attach_controller" 00:27:48.266 },{ 00:27:48.266 "params": { 00:27:48.266 "name": "Nvme6", 00:27:48.266 "trtype": "tcp", 00:27:48.266 "traddr": "10.0.0.2", 00:27:48.266 "adrfam": "ipv4", 00:27:48.266 "trsvcid": "4420", 00:27:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:48.266 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:48.266 "hdgst": false, 00:27:48.266 "ddgst": false 00:27:48.266 }, 00:27:48.266 "method": "bdev_nvme_attach_controller" 00:27:48.266 },{ 00:27:48.266 "params": { 00:27:48.266 "name": "Nvme7", 00:27:48.266 "trtype": "tcp", 00:27:48.266 "traddr": "10.0.0.2", 00:27:48.266 "adrfam": "ipv4", 00:27:48.266 "trsvcid": "4420", 00:27:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:48.266 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:48.266 "hdgst": false, 00:27:48.266 "ddgst": false 00:27:48.267 }, 00:27:48.267 "method": "bdev_nvme_attach_controller" 00:27:48.267 },{ 00:27:48.267 "params": { 00:27:48.267 "name": "Nvme8", 00:27:48.267 "trtype": "tcp", 00:27:48.267 "traddr": "10.0.0.2", 00:27:48.267 "adrfam": "ipv4", 00:27:48.267 "trsvcid": "4420", 00:27:48.267 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:48.267 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:48.267 "hdgst": false, 
00:27:48.267 "ddgst": false 00:27:48.267 }, 00:27:48.267 "method": "bdev_nvme_attach_controller" 00:27:48.267 },{ 00:27:48.267 "params": { 00:27:48.267 "name": "Nvme9", 00:27:48.267 "trtype": "tcp", 00:27:48.267 "traddr": "10.0.0.2", 00:27:48.267 "adrfam": "ipv4", 00:27:48.267 "trsvcid": "4420", 00:27:48.267 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:48.267 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:48.267 "hdgst": false, 00:27:48.267 "ddgst": false 00:27:48.267 }, 00:27:48.267 "method": "bdev_nvme_attach_controller" 00:27:48.267 },{ 00:27:48.267 "params": { 00:27:48.267 "name": "Nvme10", 00:27:48.267 "trtype": "tcp", 00:27:48.267 "traddr": "10.0.0.2", 00:27:48.267 "adrfam": "ipv4", 00:27:48.267 "trsvcid": "4420", 00:27:48.267 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:48.267 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:48.267 "hdgst": false, 00:27:48.267 "ddgst": false 00:27:48.267 }, 00:27:48.267 "method": "bdev_nvme_attach_controller" 00:27:48.267 }' 00:27:48.267 [2024-07-14 04:44:08.343219] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:48.267 [2024-07-14 04:44:08.343295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2877443 ] 00:27:48.267 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.267 [2024-07-14 04:44:08.406248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.524 [2024-07-14 04:44:08.493885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.423 Running I/O for 10 seconds... 00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 
00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:50.423 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:50.685 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:50.685 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:50.685 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:50.685 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:50.685 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.685 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:50.685 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.685 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=9 00:27:50.685 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 9 -ge 100 ']' 00:27:50.685 04:44:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:50.960 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:50.960 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:50.960 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:50.960 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:50.960 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.960 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:50.960 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.960 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:50.960 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:50.960 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:51.220 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:51.220 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:51.220 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:51.220 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r 
'.bdevs[0].num_read_ops' 00:27:51.220 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.220 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.220 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.220 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:51.220 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:51.220 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:51.220 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:51.220 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:51.220 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2877265 00:27:51.220 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 2877265 ']' 00:27:51.220 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 2877265 00:27:51.220 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:27:51.220 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:51.220 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2877265 00:27:51.491 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:51.491 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:51.491 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2877265' 00:27:51.491 killing process with pid 2877265 00:27:51.491 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 2877265 00:27:51.491 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 2877265 00:27:51.491 [2024-07-14 04:44:11.436380] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436511] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436524] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436537] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436550] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436562] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436578] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with 
the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436590] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436603] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436615] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436627] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436640] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436652] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436664] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436677] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436689] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436702] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436728] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436741] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436753] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436765] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436778] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436790] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436802] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436827] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436851] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436883] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436897] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436909] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436923] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436936] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436948] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436960] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436972] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.436984] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.437005] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.437017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.437030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.437042] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.437054] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.491 [2024-07-14 04:44:11.437066] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.437082] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.437095] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.437107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.437120] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.437132] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.437144] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.492 [2024-07-14 
04:44:11.437156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.437167] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.437179] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e578c0 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.438824] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.438857] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.438890] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.438913] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.438941] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.438963] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.438977] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.438995] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439007] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439019] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439042] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439060] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439072] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439083] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439095] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439111] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439123] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439140] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same 
with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439153] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439164] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439176] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439187] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439199] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439211] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439223] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439235] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439247] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439259] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439271] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439283] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439295] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439307] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439318] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439330] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439345] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439357] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439369] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439381] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439404] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439428] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439440] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439452] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439468] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439481] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439492] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439504] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439516] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439528] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439541] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439553] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439565] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439576] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439589] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439601] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.439612] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4bf30 is same with the state(5) to be set 00:27:51.492 [2024-07-14 04:44:11.440009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.440980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.440993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.441009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.441026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.441041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.441056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.441071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.493 [2024-07-14 04:44:11.441085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.493 [2024-07-14 04:44:11.441100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.494 [2024-07-14 04:44:11.441114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.494 [2024-07-14 04:44:11.441130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.494 [2024-07-14 04:44:11.441143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.494 [2024-07-14 04:44:11.441159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.494 [2024-07-14 04:44:11.441172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.494 [2024-07-14 04:44:11.441188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.494 [2024-07-14 04:44:11.441202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.494 [2024-07-14 04:44:11.441217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.494 [2024-07-14 04:44:11.441231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.494 [2024-07-14 04:44:11.441246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.494 [2024-07-14 04:44:11.441259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.494 [2024-07-14 04:44:11.441274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.494 [2024-07-14 04:44:11.441288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.494 [2024-07-14 04:44:11.441303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.494 [2024-07-14 04:44:11.441305] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with [2024-07-14 
04:44:11.441317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:51.494 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.494 [2024-07-14 04:44:11.441337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.494 [2024-07-14 04:44:11.441341] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set 00:27:51.494 [2024-07-14 04:44:11.441351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.494 [2024-07-14 04:44:11.441357] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set 00:27:51.494 [2024-07-14 04:44:11.441371] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with [2024-07-14 04:44:11.441370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:1the state(5) to be set 00:27:51.494 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.494 [2024-07-14 04:44:11.441385] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set 00:27:51.494 [2024-07-14 04:44:11.441387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.494 [2024-07-14 04:44:11.441398] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set 00:27:51.494 [2024-07-14 04:44:11.441403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.494 [2024-07-14 04:44:11.441410] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set 00:27:51.494 [2024-07-14 04:44:11.441417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.494 [2024-07-14 04:44:11.441422] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set 00:27:51.494 [2024-07-14 04:44:11.441433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:1[2024-07-14 04:44:11.441435] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.494 the state(5) to be set 00:27:51.494 [2024-07-14 04:44:11.441464] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with [2024-07-14 04:44:11.441464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:51.494 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.494 [2024-07-14 04:44:11.441478] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set 00:27:51.494 [2024-07-14 04:44:11.441482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.494 [2024-07-14 04:44:11.441490] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is 
same with the state(5) to be set
00:27:51.494 [2024-07-14 04:44:11.441496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.494 [2024-07-14 04:44:11.441503] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.494 [2024-07-14 04:44:11.441511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.494 [2024-07-14 04:44:11.441515] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.494 [2024-07-14 04:44:11.441525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.494 [2024-07-14 04:44:11.441527] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.494 [2024-07-14 04:44:11.441540] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.494 [2024-07-14 04:44:11.441540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.494 [2024-07-14 04:44:11.441554] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.494 [2024-07-14 04:44:11.441555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.494 [2024-07-14 04:44:11.441566] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.494 [2024-07-14 04:44:11.441574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.494 [2024-07-14 04:44:11.441578] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.494 [2024-07-14 04:44:11.441589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.494 [2024-07-14 04:44:11.441590] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.494 [2024-07-14 04:44:11.441603] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.494 [2024-07-14 04:44:11.441605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.494 [2024-07-14 04:44:11.441615] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.494 [2024-07-14 04:44:11.441619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.494 [2024-07-14 04:44:11.441627] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.494 [2024-07-14 04:44:11.441634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.494 [2024-07-14 04:44:11.441639] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.494 [2024-07-14 04:44:11.441648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.494 [2024-07-14 04:44:11.441651] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.494 [2024-07-14 04:44:11.441663] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.494 [2024-07-14 04:44:11.441663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.494 [2024-07-14 04:44:11.441677] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.494 [2024-07-14 04:44:11.441679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.494 [2024-07-14 04:44:11.441689] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.441694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.495 [2024-07-14 04:44:11.441702] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.441708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.495 [2024-07-14 04:44:11.441714] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.441723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.495 [2024-07-14 04:44:11.441726] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.441736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.495 [2024-07-14 04:44:11.441738] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.441754] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.441756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.495 [2024-07-14 04:44:11.441766] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.441770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.495 [2024-07-14 04:44:11.441778] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.441785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.495 [2024-07-14 04:44:11.441790] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.441799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.495 [2024-07-14 04:44:11.441803] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.441814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.495 [2024-07-14 04:44:11.441815] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.441829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.495 [2024-07-14 04:44:11.441829] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.441843] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.441845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.495 [2024-07-14 04:44:11.441855] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.441858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.495 [2024-07-14 04:44:11.441890] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.441897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.495 [2024-07-14 04:44:11.441906] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.441922] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.441922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.495 [2024-07-14 04:44:11.441937] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.441940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.495 [2024-07-14 04:44:11.441949] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.441954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.495 [2024-07-14 04:44:11.441969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.441972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.495 [2024-07-14 04:44:11.441982] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.441994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.495 [2024-07-14 04:44:11.441995] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.442009] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.442011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.495 [2024-07-14 04:44:11.442022] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.442025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.495 [2024-07-14 04:44:11.442035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.442041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.495 [2024-07-14 04:44:11.442047] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.442055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.495 [2024-07-14 04:44:11.442060] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.442072] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.442084] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.442096] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.442108] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c3d0 is same with the state(5) to be set
00:27:51.495 [2024-07-14 04:44:11.442160] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x107aff0 was disconnected and freed. reset controller. 
00:27:51.495 [2024-07-14 04:44:11.443413] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.495 [2024-07-14 04:44:11.443456] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.495 [2024-07-14 04:44:11.443481] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.495 [2024-07-14 04:44:11.443506] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.495 [2024-07-14 04:44:11.443529] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.495 [2024-07-14 04:44:11.443554] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.495 [2024-07-14 04:44:11.443585] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.495 [2024-07-14 04:44:11.443611] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.495 [2024-07-14 04:44:11.443635] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.495 [2024-07-14 04:44:11.443659] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.495 [2024-07-14 04:44:11.443682] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.495 [2024-07-14 04:44:11.443707] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.495 [2024-07-14 04:44:11.443745] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.495 [2024-07-14 04:44:11.443767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.495 [2024-07-14 04:44:11.443787] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.495 [2024-07-14 04:44:11.443813] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.443834] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.443884] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.443910] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.443935] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.443960] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.443983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444011] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444087] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444112] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444135] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444257] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444281] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444304] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444325] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444347] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444368] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444403] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444426] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444447] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444468] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444489] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444519] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444542] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444610] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444761] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set 00:27:51.496 [2024-07-14 04:44:11.444791] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set
00:27:51.496 [2024-07-14 04:44:11.444812] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set
00:27:51.496 [2024-07-14 04:44:11.444836] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set
00:27:51.496 [2024-07-14 04:44:11.444858] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set
00:27:51.496 [2024-07-14 04:44:11.444892] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set
00:27:51.496 [2024-07-14 04:44:11.445073] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set
00:27:51.496 [2024-07-14 04:44:11.445150] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set
00:27:51.496 [2024-07-14 04:44:11.445175] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set
00:27:51.496 [2024-07-14 04:44:11.445198] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set
00:27:51.496 [2024-07-14 04:44:11.445215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:51.496 [2024-07-14 04:44:11.445218] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set
00:27:51.496 [2024-07-14 04:44:11.445246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.496 [2024-07-14 04:44:11.445258] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set
00:27:51.496 [2024-07-14 04:44:11.445265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:51.496 [2024-07-14 04:44:11.445283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.496 [2024-07-14 04:44:11.445286] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set
00:27:51.496 [2024-07-14 04:44:11.445297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:51.496 [2024-07-14 04:44:11.445317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.496 [2024-07-14 04:44:11.445332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:51.496 [2024-07-14 04:44:11.445346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.496 [2024-07-14 04:44:11.445359] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b7a60 is same with the state(5) to be set
00:27:51.496 [2024-07-14 04:44:11.445407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:51.496 [2024-07-14 04:44:11.445428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.496 [2024-07-14 04:44:11.445445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:51.496 [2024-07-14 04:44:11.445458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.496 [2024-07-14 04:44:11.445472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:51.497 [2024-07-14 04:44:11.445486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.497 [2024-07-14 04:44:11.445312] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set
00:27:51.497 [2024-07-14 04:44:11.445500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:51.497 [2024-07-14 04:44:11.445515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.497 [2024-07-14 04:44:11.445519] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set
00:27:51.497 [2024-07-14 04:44:11.445529] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b6cc0 is same with the state(5) to be set
00:27:51.497 [2024-07-14 04:44:11.445543] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set
00:27:51.497 [2024-07-14 04:44:11.445566] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set
00:27:51.497 [2024-07-14 04:44:11.445586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:51.497 [2024-07-14 04:44:11.445586] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set
00:27:51.497 [2024-07-14 04:44:11.445610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.497 [2024-07-14 04:44:11.445613] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set
00:27:51.497 [2024-07-14 04:44:11.445626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:51.497 [2024-07-14 04:44:11.445640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.497 [2024-07-14 04:44:11.445636] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set
00:27:51.497 [2024-07-14 04:44:11.445657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:51.497 [2024-07-14 04:44:11.445661] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4c890 is same with the state(5) to be set
00:27:51.497 [2024-07-14 04:44:11.445671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.497 [2024-07-14 04:44:11.445690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:51.497 [2024-07-14 04:44:11.445704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.497 [2024-07-14 04:44:11.445717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1568ec0 is same with the state(5) to be set
00:27:51.497 [2024-07-14 04:44:11.445766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:51.497 [2024-07-14 04:44:11.445787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.497 [2024-07-14 04:44:11.445802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:51.497 [2024-07-14 04:44:11.445815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.497 [2024-07-14 04:44:11.445829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:51.497 [2024-07-14 04:44:11.445843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.497 [2024-07-14 04:44:11.445857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:51.497 [2024-07-14 04:44:11.445878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.497 [2024-07-14 04:44:11.445893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1077df0 is same with the state(5) to be set
00:27:51.497 [2024-07-14 04:44:11.445939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:51.497 [2024-07-14 04:44:11.445959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.497 [2024-07-14 04:44:11.445974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:51.497 [2024-07-14 04:44:11.445987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.497 [2024-07-14 04:44:11.446002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:51.497 [2024-07-14 04:44:11.446015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.497 [2024-07-14 04:44:11.446029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.497 [2024-07-14 04:44:11.446043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.497 [2024-07-14 04:44:11.446056] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0400 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.446903] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.446945] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.446960] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.446973] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447007] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447019] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447031] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447069] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447093] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447105] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447117] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447129] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447141] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447152] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447179] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447191] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447203] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447226] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447237] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447248] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447259] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447271] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447282] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447293] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447304] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447316] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447343] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.497 [2024-07-14 04:44:11.447359] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.447372] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.447553] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.447569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.447581] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.447852] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.447872] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.447886] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.447898] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.447933] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the 
state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.447952] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.447965] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.447977] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.447991] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.448003] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.448014] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.448026] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.448038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.448053] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.448065] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.448077] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.448089] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.448101] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.448113] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.448125] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.448137] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.448148] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.448169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.448182] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.448196] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.448214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.448227] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1f4cd30 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.448572] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:51.498 [2024-07-14 04:44:11.448618] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1077df0 (9): Bad file descriptor 00:27:51.498 [2024-07-14 04:44:11.450090] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450119] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450136] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450149] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450161] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450173] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450185] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450200] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450212] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450225] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450237] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450248] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450260] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450274] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450287] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450299] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450314] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450326] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450339] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450350] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450366] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450386] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450399] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450412] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450431] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450443] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450458] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450470] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450482] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450494] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450506] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450517] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450529] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450541] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450553] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450565] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.498 [2024-07-14 04:44:11.450577] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450588] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450600] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450611] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450624] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the 
state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450636] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450648] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450660] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450671] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450683] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450695] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450706] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450721] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450733] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450745] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450757] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450769] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450780] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450792] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450804] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450815] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450827] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450850] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450862] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450883] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.450895] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1bf6b00 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.451220] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:51.499 [2024-07-14 04:44:11.451550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.499 [2024-07-14 04:44:11.451579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1077df0 with addr=10.0.0.2, port=4420 00:27:51.499 [2024-07-14 04:44:11.451596] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1077df0 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.451660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.499 [2024-07-14 04:44:11.451683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.499 [2024-07-14 04:44:11.451706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.499 [2024-07-14 04:44:11.451722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.499 [2024-07-14 04:44:11.451738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.499 [2024-07-14 04:44:11.451753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.499 [2024-07-14 04:44:11.451769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.499 [2024-07-14 04:44:11.451783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.499 [2024-07-14 04:44:11.451804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.499 [2024-07-14 04:44:11.451819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.499 [2024-07-14 04:44:11.451835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.499 [2024-07-14 04:44:11.451849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.499 [2024-07-14 04:44:11.451864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107c320 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.451974] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x107c320 was disconnected and freed. reset controller. 
00:27:51.499 [2024-07-14 04:44:11.452739] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56ac0 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.452769] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56ac0 is same with the state(5) to be set 00:27:51.499 [2024-07-14 04:44:11.453122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.499 [2024-07-14 04:44:11.453148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.499 [2024-07-14 04:44:11.453168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.499 [2024-07-14 04:44:11.453184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.499 [2024-07-14 04:44:11.453200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.499 [2024-07-14 04:44:11.453214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.499 [2024-07-14 04:44:11.453229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.499 [2024-07-14 04:44:11.453244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.499 [2024-07-14 04:44:11.453259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.499 [2024-07-14 04:44:11.453272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.499 [2024-07-14 04:44:11.453288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.499 [2024-07-14 04:44:11.453301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.499 [2024-07-14 04:44:11.453316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.499 [2024-07-14 04:44:11.453330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.499 [2024-07-14 04:44:11.453345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.499 [2024-07-14 04:44:11.453359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.499 [2024-07-14 04:44:11.453374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.499 [2024-07-14 04:44:11.453398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 [2024-07-14 
04:44:11.453414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.453428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 [2024-07-14 04:44:11.453444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.453458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 [2024-07-14 04:44:11.453474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.453488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 [2024-07-14 04:44:11.453504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.453517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 [2024-07-14 04:44:11.453533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.453547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 [2024-07-14 04:44:11.453563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.453577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 [2024-07-14 04:44:11.453568] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.500 [2024-07-14 04:44:11.453592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.453607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 [2024-07-14 04:44:11.453610] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.500 [2024-07-14 04:44:11.453623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.453637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 [2024-07-14 04:44:11.453641] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.500 [2024-07-14 04:44:11.453653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.453667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-14 04:44:11.453663] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 the state(5) to be set 00:27:51.500 [2024-07-14 04:44:11.453686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.453692] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.500 [2024-07-14 04:44:11.453700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 [2024-07-14 04:44:11.453715] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with [2024-07-14 04:44:11.453721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128the state(5) to be set 00:27:51.500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.453738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 [2024-07-14 04:44:11.453744] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.500 [2024-07-14 04:44:11.453753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.453768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 [2024-07-14 04:44:11.453766] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.500 [2024-07-14 04:44:11.453784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.453794] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with [2024-07-14 04:44:11.453798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:51.500 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 [2024-07-14 04:44:11.453818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.453817] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.500 [2024-07-14 04:44:11.453831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 [2024-07-14 04:44:11.453847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.453844] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.500 [2024-07-14 04:44:11.453861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 
[2024-07-14 04:44:11.453877] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.500 [2024-07-14 04:44:11.453892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.453909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 [2024-07-14 04:44:11.453907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.500 [2024-07-14 04:44:11.453925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.453939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 [2024-07-14 04:44:11.453936] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.500 [2024-07-14 04:44:11.453955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.453961] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.500 [2024-07-14 04:44:11.453970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 [2024-07-14 04:44:11.454001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.454001] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.500 [2024-07-14 04:44:11.454015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 [2024-07-14 04:44:11.454024] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.500 [2024-07-14 04:44:11.454030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.454049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 [2024-07-14 04:44:11.454052] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.500 [2024-07-14 04:44:11.454065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.454079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 [2024-07-14 04:44:11.454076] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.500 [2024-07-14 04:44:11.454095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 [2024-07-14 04:44:11.454101] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.500 [2024-07-14 04:44:11.454109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.500 [2024-07-14 04:44:11.454126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:12[2024-07-14 04:44:11.454124] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.500 the state(5) to be set 00:27:51.500 [2024-07-14 04:44:11.454146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.501 [2024-07-14 04:44:11.454151] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.501 [2024-07-14 04:44:11.454162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.501 [2024-07-14 04:44:11.454191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.501 [2024-07-14 04:44:11.454193] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.501 [2024-07-14 04:44:11.454207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.501 [2024-07-14 04:44:11.454215] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with [2024-07-14 04:44:11.454220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:51.501 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.501 [2024-07-14 04:44:11.454238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.501 [2024-07-14 04:44:11.454241] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with [2024-07-14 04:44:11.454251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:51.501 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.501 [2024-07-14 04:44:11.454269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.501 [2024-07-14 04:44:11.454269] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.501 [2024-07-14 04:44:11.454283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.501 [2024-07-14 04:44:11.454294] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with [2024-07-14 04:44:11.454298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:12the state(5) to be set 00:27:51.501 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.501 [2024-07-14 04:44:11.454315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.501 [2024-07-14 04:44:11.454317] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.501 [2024-07-14 04:44:11.454330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.501 [2024-07-14 04:44:11.454343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.501 [2024-07-14 04:44:11.454342] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.501 [2024-07-14 04:44:11.454358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.501 [2024-07-14 04:44:11.454364] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.501 [2024-07-14 04:44:11.454372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.501 [2024-07-14 04:44:11.454389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.501 [2024-07-14 04:44:11.454386] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.501 [2024-07-14 04:44:11.454402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.501 [2024-07-14 04:44:11.454410] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with [2024-07-14 04:44:11.454417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:12the state(5) to be set 00:27:51.501 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.501 [2024-07-14 04:44:11.454432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.501 [2024-07-14 04:44:11.454434] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.501 [2024-07-14 04:44:11.454446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.501 [2024-07-14 04:44:11.454460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.501 [2024-07-14 04:44:11.454458] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.501 [2024-07-14 04:44:11.454475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.501 [2024-07-14 04:44:11.454489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-14 04:44:11.454482] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.501 the state(5) to be set 
00:27:51.501 [2024-07-14 04:44:11.454510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.501 [2024-07-14 04:44:11.454512] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.501 [2024-07-14 04:44:11.454524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.501 [2024-07-14 04:44:11.454534] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with [2024-07-14 04:44:11.454539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:12the state(5) to be set 00:27:51.501 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.501 [2024-07-14 04:44:11.454557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.501 [2024-07-14 04:44:11.454559] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.501 [2024-07-14 04:44:11.454572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.501 [2024-07-14 04:44:11.454585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.501 [2024-07-14 04:44:11.454581] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.501 [2024-07-14 04:44:11.454600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.501 [2024-07-14 04:44:11.454605] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.501 [2024-07-14 04:44:11.454622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.501 [2024-07-14 04:44:11.454628] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.501 [2024-07-14 04:44:11.454638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.501 [2024-07-14 04:44:11.454652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-14 04:44:11.454649] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.501 the state(5) to be set 00:27:51.501 [2024-07-14 04:44:11.454669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.501 [2024-07-14 04:44:11.454672] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.501 [2024-07-14 04:44:11.454683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.501 [2024-07-14 04:44:11.454699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 
lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.501 [2024-07-14 04:44:11.454697] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.501 [2024-07-14 04:44:11.454713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.501 [2024-07-14 04:44:11.454720] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.501 [2024-07-14 04:44:11.454728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.501 [2024-07-14 04:44:11.454746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.501 [2024-07-14 04:44:11.454744] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.501 [2024-07-14 04:44:11.454761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.501 [2024-07-14 04:44:11.454767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.501 [2024-07-14 04:44:11.454775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.501 [2024-07-14 04:44:11.454790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.501 [2024-07-14 04:44:11.454788] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.501 [2024-07-14 04:44:11.454804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.502 [2024-07-14 04:44:11.454813] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with [2024-07-14 04:44:11.454819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:12the state(5) to be set 00:27:51.502 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.502 [2024-07-14 04:44:11.454834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.502 [2024-07-14 04:44:11.454836] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.502 [2024-07-14 04:44:11.454849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.502 [2024-07-14 04:44:11.454863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.502 [2024-07-14 04:44:11.454860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.502 [2024-07-14 04:44:11.454901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.502 [2024-07-14 04:44:11.454911] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with [2024-07-14 04:44:11.454916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:51.502 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.502 [2024-07-14 04:44:11.454940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.502 [2024-07-14 04:44:11.454940] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.502 [2024-07-14 04:44:11.454954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.502 [2024-07-14 04:44:11.454966] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with [2024-07-14 04:44:11.454970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:12the state(5) to be set 00:27:51.502 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.502 [2024-07-14 04:44:11.454988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.502 [2024-07-14 04:44:11.454990] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.502 [2024-07-14 04:44:11.455003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.502 [2024-07-14 04:44:11.455015] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.502 [2024-07-14 04:44:11.455023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.502 [2024-07-14 04:44:11.455039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.502 [2024-07-14 04:44:11.455035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.502 [2024-07-14 04:44:11.455053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.502 [2024-07-14 04:44:11.455062] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with [2024-07-14 04:44:11.455068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:12the state(5) to be set 00:27:51.502 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.502 [2024-07-14 04:44:11.455084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.502 [2024-07-14 04:44:11.455099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.502 [2024-07-14 04:44:11.455085] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.502 [2024-07-14 04:44:11.455112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:51.502 [2024-07-14 04:44:11.455128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.502 [2024-07-14 04:44:11.455127] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.502 [2024-07-14 04:44:11.455147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.502 [2024-07-14 04:44:11.455156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with [2024-07-14 04:44:11.455163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:12the state(5) to be set 00:27:51.502 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.502 [2024-07-14 04:44:11.455192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.502 [2024-07-14 04:44:11.455193] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.502 [2024-07-14 04:44:11.455218] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56f60 is same with the state(5) to be set 00:27:51.502 [2024-07-14 04:44:11.455230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.502 [2024-07-14 04:44:11.455303] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x155a430 was disconnected and freed. reset controller. 00:27:51.502 [2024-07-14 04:44:11.455749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.502 [2024-07-14 04:44:11.455772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.502 [2024-07-14 04:44:11.455793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.502 [2024-07-14 04:44:11.455808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.502 [2024-07-14 04:44:11.455829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.502 [2024-07-14 04:44:11.455844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.502 [2024-07-14 04:44:11.455860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.502 [2024-07-14 04:44:11.455882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.502 [2024-07-14 04:44:11.455899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.502 [2024-07-14 04:44:11.455920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.502 [2024-07-14 04:44:11.455935] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.502 [2024-07-14 04:44:11.455949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.502 [2024-07-14 04:44:11.455964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.502 [2024-07-14 04:44:11.455978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.502 [2024-07-14 04:44:11.455994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.502 [2024-07-14 04:44:11.456008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456414] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57400 is same with the state(5) to be set 00:27:51.503 [2024-07-14 04:44:11.456439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456444] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57400 is same with the state(5) to be set 00:27:51.503 [2024-07-14 04:44:11.456455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456459] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57400 is same with the state(5) to be set 00:27:51.503 [2024-07-14 04:44:11.456471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:27:51.503 [2024-07-14 04:44:11.456518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 
[2024-07-14 04:44:11.456838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.456966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.456991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.457007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.457021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.457037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.457050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.457065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.457079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.503 [2024-07-14 04:44:11.457094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.503 [2024-07-14 04:44:11.457108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.457123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.504 [2024-07-14 04:44:11.457137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 
04:44:11.457152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.504 [2024-07-14 04:44:11.457166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.457181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.504 [2024-07-14 04:44:11.457200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.457216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.504 [2024-07-14 04:44:11.457231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.457246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.504 [2024-07-14 04:44:11.457260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.457275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.504 [2024-07-14 04:44:11.457294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.457310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.504 [2024-07-14 04:44:11.457324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.457340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.504 [2024-07-14 04:44:11.457353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.457372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.504 [2024-07-14 04:44:11.457387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.457402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.504 [2024-07-14 04:44:11.457416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.457431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.504 [2024-07-14 04:44:11.457445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.457460] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.504 [2024-07-14 04:44:11.457474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.457490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.504 [2024-07-14 04:44:11.457503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.457519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.504 [2024-07-14 04:44:11.457532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.457548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.504 [2024-07-14 04:44:11.457561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.457577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.504 [2024-07-14 04:44:11.457591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.457606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.504 [2024-07-14 04:44:11.457620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.457635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.504 [2024-07-14 04:44:11.457648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.457664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.504 [2024-07-14 04:44:11.457682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.457698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.504 [2024-07-14 04:44:11.457713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.457728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.504 [2024-07-14 04:44:11.457745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.479992] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1616e20 is same with the state(5) to be set 00:27:51.504 [2024-07-14 04:44:11.480820] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1616e20 was disconnected and freed. reset controller. 00:27:51.504 [2024-07-14 04:44:11.480927] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1077df0 (9): Bad file descriptor 00:27:51.504 [2024-07-14 04:44:11.481070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.504 [2024-07-14 04:44:11.481096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.481113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.504 [2024-07-14 04:44:11.481127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.481141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.504 [2024-07-14 04:44:11.481154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.481168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.504 [2024-07-14 04:44:11.481181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.481194] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d5f30 is same with the state(5) to be set 00:27:51.504 [2024-07-14 04:44:11.481254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.504 [2024-07-14 04:44:11.481274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.481290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.504 [2024-07-14 04:44:11.481303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.481317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.504 [2024-07-14 04:44:11.481331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.481345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.504 [2024-07-14 04:44:11.481358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.481371] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d7de0 is same with the state(5) to be set 
00:27:51.504 [2024-07-14 04:44:11.481399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b7a60 (9): Bad file descriptor 00:27:51.504 [2024-07-14 04:44:11.481430] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b6cc0 (9): Bad file descriptor 00:27:51.504 [2024-07-14 04:44:11.481481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.504 [2024-07-14 04:44:11.481502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.504 [2024-07-14 04:44:11.481529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.504 [2024-07-14 04:44:11.481544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.481559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.505 [2024-07-14 04:44:11.481572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.481586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.505 [2024-07-14 04:44:11.481599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.481612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631110 is same with the state(5) to be set 00:27:51.505 [2024-07-14 04:44:11.481642] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1568ec0 (9): Bad file descriptor 00:27:51.505 [2024-07-14 04:44:11.481669] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b0400 (9): Bad file descriptor 00:27:51.505 [2024-07-14 04:44:11.481712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.505 [2024-07-14 04:44:11.481731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.481746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.505 [2024-07-14 04:44:11.481760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.481773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.505 [2024-07-14 04:44:11.481787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.481800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.505 [2024-07-14 04:44:11.481813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.481826] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d0e60 is same with the state(5) to be set 00:27:51.505 [2024-07-14 04:44:11.481873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.505 [2024-07-14 04:44:11.481894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.481909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.505 [2024-07-14 04:44:11.481922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.481936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.505 [2024-07-14 04:44:11.481949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.481962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.505 [2024-07-14 04:44:11.481981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.481994] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652cf0 is same with the state(5) to be set 00:27:51.505 [2024-07-14 04:44:11.482022] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:51.505 [2024-07-14 04:44:11.482997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.505 [2024-07-14 04:44:11.483021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.483045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.505 [2024-07-14 04:44:11.483060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.483077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.505 [2024-07-14 04:44:11.483091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.483107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.505 [2024-07-14 04:44:11.483121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.483137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.505 [2024-07-14 04:44:11.483151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.483167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.505 [2024-07-14 04:44:11.483180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.483196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.505 [2024-07-14 04:44:11.483210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.483226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.505 [2024-07-14 04:44:11.483239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.483255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.505 [2024-07-14 04:44:11.483269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.483284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.505 [2024-07-14 04:44:11.483299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 
04:44:11.483315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.505 [2024-07-14 04:44:11.483328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.483344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.505 [2024-07-14 04:44:11.483363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.483379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.505 [2024-07-14 04:44:11.483393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.483409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.505 [2024-07-14 04:44:11.483423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.483439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.505 [2024-07-14 04:44:11.483453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.483469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.505 [2024-07-14 04:44:11.483483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.483499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.505 [2024-07-14 04:44:11.483513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.483528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.505 [2024-07-14 04:44:11.483542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.505 [2024-07-14 04:44:11.483558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.505 [2024-07-14 04:44:11.483572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.483588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.483602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.483618] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.483633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.483648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.483662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.483677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.483691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.483707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.483721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.483740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.483755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.483770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.483784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.483800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.483814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.483829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.483843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.483859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.483883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.483900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.483914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.483930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.483945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.483960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.483974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.483990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.484003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.484019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.484034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.484050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.484064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.484080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.484094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.484109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.484127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.484143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.484157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.484172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.484186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.484202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.484216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.484231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.484245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.484261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.484275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.484290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.484304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.484320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.484334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.484349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.484364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.484379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.484393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.484408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.484422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.484438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.484452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.484468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.484482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.484502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.484517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.484533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.484547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.506 [2024-07-14 04:44:11.484563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.506 [2024-07-14 04:44:11.484577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.484593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.484607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.484622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.484636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.484652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.484665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.484681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.484695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.484711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.484724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.484740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.484754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.484769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.484783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.484798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.484812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.484828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.484842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.484857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.484882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.484899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.484914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.484929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.484943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.485039] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1556470 was disconnected and freed. reset controller. 00:27:51.507 [2024-07-14 04:44:11.485146] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:51.507 [2024-07-14 04:44:11.485221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.485242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.485263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.485278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.485295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.485309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.485324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.485338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.485353] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148bef0 is same with the state(5) to be set 00:27:51.507 [2024-07-14 04:44:11.485436] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x148bef0 was disconnected and freed. reset controller. 
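(Editor's note, not part of the captured log.) The "(00/08)" printed with each ABORTED completion above is the NVMe status code type / status code pair: SCT 0x00 (Generic Command Status) and SC 0x08 (Command Aborted due to SQ Deletion), which is the expected completion status when I/O submission queues are torn down during a controller reset. A minimal stand-alone decoder for the 16-bit completion status field is sketched below; the bit layout follows the NVMe base specification, and the struct and function names are local to this sketch, not taken from the SPDK sources.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative decoder for the 16-bit NVMe completion status field
     * (completion queue entry DW3, bits 31:16). Field layout per the NVMe
     * base specification; names are local to this sketch. */
    struct nvme_status_fields {
        uint8_t p;    /* phase tag            (bit 0)      */
        uint8_t sc;   /* status code          (bits 8:1)   */
        uint8_t sct;  /* status code type     (bits 11:9)  */
        uint8_t crd;  /* command retry delay  (bits 13:12) */
        uint8_t m;    /* more                 (bit 14)     */
        uint8_t dnr;  /* do not retry         (bit 15)     */
    };

    static struct nvme_status_fields decode_status(uint16_t raw)
    {
        struct nvme_status_fields s;
        s.p   = raw & 0x1;
        s.sc  = (raw >> 1) & 0xff;
        s.sct = (raw >> 9) & 0x7;
        s.crd = (raw >> 12) & 0x3;
        s.m   = (raw >> 14) & 0x1;
        s.dnr = (raw >> 15) & 0x1;
        return s;
    }

    int main(void)
    {
        /* SCT 0x0 / SC 0x08 is "Command Aborted due to SQ Deletion",
         * which the log prints as "ABORTED - SQ DELETION (00/08)". */
        uint16_t raw = (uint16_t)((0x0 << 9) | (0x08 << 1));
        struct nvme_status_fields s = decode_status(raw);
        printf("sct:%#x sc:%#x p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
        return 0;
    }

The p:0 m:0 dnr:0 fields echoed in every completion line above are the same phase, more, and do-not-retry bits extracted here.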
00:27:51.507 [2024-07-14 04:44:11.486814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.486839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.486861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.486886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.486904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.486918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.486934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.486948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.486964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.486984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.487000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.487014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.487030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.487044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.487060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.487073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.507 [2024-07-14 04:44:11.487089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.507 [2024-07-14 04:44:11.487103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 
[2024-07-14 04:44:11.487148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487446] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487744] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.487985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.487999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.488014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.488028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.488044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.488057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.488072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.488087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.508 [2024-07-14 04:44:11.488106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.508 [2024-07-14 04:44:11.488120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.488150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.488180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.488209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.488239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.488268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.488298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.488328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.488357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.488386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.488415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.488445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.488478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.488508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.488537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.488566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.488595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.488633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.488663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.488692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.488721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.488750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.488843] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1615920 was disconnected and freed. reset controller. 00:27:51.509 [2024-07-14 04:44:11.489998] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:51.509 [2024-07-14 04:44:11.490063] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:51.509 [2024-07-14 04:44:11.490082] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:51.509 [2024-07-14 04:44:11.490099] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:51.509 [2024-07-14 04:44:11.493449] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:51.509 [2024-07-14 04:44:11.493492] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:51.509 [2024-07-14 04:44:11.493519] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
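(Editor's note, not part of the captured log.) The "connect() failed, errno = 111" entries that follow are ordinary POSIX connect failures: on Linux, errno 111 is ECONNREFUSED, meaning nothing is listening on 10.0.0.2:4420 at the moment the reconnect is attempted while the target is being reset. The sketch below reproduces that errno outside SPDK using a plain socket; the address and port are taken from the log, everything else is illustrative.

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Minimal sketch: a plain TCP connect() to the NVMe/TCP target address
     * seen in the log. If the host is reachable but no listener is bound to
     * the port, connect() fails with errno 111 (ECONNREFUSED), matching the
     * posix_sock_create error printed above. */
    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }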
00:27:51.509 [2024-07-14 04:44:11.493556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d0e60 (9): Bad file descriptor 00:27:51.509 [2024-07-14 04:44:11.493784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.509 [2024-07-14 04:44:11.493812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b0400 with addr=10.0.0.2, port=4420 00:27:51.509 [2024-07-14 04:44:11.493830] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0400 is same with the state(5) to be set 00:27:51.509 [2024-07-14 04:44:11.493853] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d5f30 (9): Bad file descriptor 00:27:51.509 [2024-07-14 04:44:11.493888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d7de0 (9): Bad file descriptor 00:27:51.509 [2024-07-14 04:44:11.493919] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:51.509 [2024-07-14 04:44:11.493954] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1631110 (9): Bad file descriptor 00:27:51.509 [2024-07-14 04:44:11.493989] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1652cf0 (9): Bad file descriptor 00:27:51.509 [2024-07-14 04:44:11.494982] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:51.509 [2024-07-14 04:44:11.495011] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:51.509 [2024-07-14 04:44:11.495203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.509 [2024-07-14 04:44:11.495230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1568ec0 with addr=10.0.0.2, port=4420 00:27:51.509 [2024-07-14 04:44:11.495246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1568ec0 is same with the state(5) to be set 00:27:51.509 [2024-07-14 04:44:11.495274] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b0400 (9): Bad file descriptor 00:27:51.509 [2024-07-14 04:44:11.495626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.495652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.495677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.495693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.495709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.509 [2024-07-14 04:44:11.495724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.509 [2024-07-14 04:44:11.495740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.510 
[2024-07-14 04:44:11.495754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.510 [2024-07-14 04:44:11.495770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.510 [2024-07-14 04:44:11.495784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.510 [2024-07-14 04:44:11.495800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.510 [2024-07-14 04:44:11.495819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.510 [2024-07-14 04:44:11.495836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.510 [2024-07-14 04:44:11.495850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.510 [2024-07-14 04:44:11.495874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.510 [2024-07-14 04:44:11.495890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.510 [2024-07-14 04:44:11.495907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.510 [2024-07-14 04:44:11.495921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.510 [2024-07-14 04:44:11.495937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.510 [2024-07-14 04:44:11.495951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.510 [2024-07-14 04:44:11.495966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.510 [2024-07-14 04:44:11.495980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.510 [2024-07-14 04:44:11.495995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.510 [2024-07-14 04:44:11.496009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.510 [2024-07-14 04:44:11.496025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.510 [2024-07-14 04:44:11.496039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.510 [2024-07-14 04:44:11.496054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.510 [2024-07-14 04:44:11.496068] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.510 [2024-07-14 04:44:11.496083 - 04:44:11.497553] nvme_qpair.c: repeated NOTICE pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): READ sqid:1 cid:14-62 nsid:1 lba:34560-40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.511 [2024-07-14 04:44:11.497569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.511 [2024-07-14 04:44:11.497582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.511 [2024-07-14 04:44:11.497596] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15579e0 is same with the state(5) to be set 00:27:51.511 [2024-07-14 04:44:11.499139] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:51.511 [2024-07-14 04:44:11.499527] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:51.511 [2024-07-14 04:44:11.499557] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:51.511 [2024-07-14 04:44:11.499576] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:51.511 [2024-07-14 04:44:11.499761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.511 [2024-07-14 04:44:11.499789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d0e60 with addr=10.0.0.2, port=4420 00:27:51.511 [2024-07-14 04:44:11.499806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d0e60 is same with the state(5) to be set 00:27:51.511 [2024-07-14 04:44:11.499965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.511 [2024-07-14 04:44:11.499991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b7a60 with addr=10.0.0.2, port=4420 00:27:51.511 [2024-07-14 04:44:11.500007] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b7a60 is same with the state(5) to be set 00:27:51.511 [2024-07-14 04:44:11.500324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.512 [2024-07-14 04:44:11.500348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d5f30 with addr=10.0.0.2, port=4420 00:27:51.512 [2024-07-14 04:44:11.500364] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d5f30 is same with the state(5) to be set 00:27:51.512 [2024-07-14 04:44:11.500385] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1568ec0 (9): Bad file descriptor 00:27:51.512 [2024-07-14 04:44:11.500403] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:51.512 [2024-07-14 04:44:11.500417] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:51.512 [2024-07-14 04:44:11.500434] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:51.512 [2024-07-14 04:44:11.500601] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:51.512 [2024-07-14 04:44:11.500650] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:51.512 [2024-07-14 04:44:11.500809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.512 [2024-07-14 04:44:11.500834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1652cf0 with addr=10.0.0.2, port=4420 00:27:51.512 [2024-07-14 04:44:11.500850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652cf0 is same with the state(5) to be set 00:27:51.512 [2024-07-14 04:44:11.501008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.512 [2024-07-14 04:44:11.501033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1077df0 with addr=10.0.0.2, port=4420 00:27:51.512 [2024-07-14 04:44:11.501048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1077df0 is same with the state(5) to be set 00:27:51.512 [2024-07-14 04:44:11.501199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.512 [2024-07-14 04:44:11.501223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b6cc0 with addr=10.0.0.2, port=4420 00:27:51.512 [2024-07-14 04:44:11.501237] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b6cc0 is same with the state(5) to be set 00:27:51.512 [2024-07-14 04:44:11.501256] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d0e60 (9): Bad file descriptor 00:27:51.512 [2024-07-14 04:44:11.501275] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b7a60 (9): Bad file descriptor 00:27:51.512 [2024-07-14 04:44:11.501292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d5f30 (9): Bad file descriptor 00:27:51.512 [2024-07-14 04:44:11.501308] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:51.512 [2024-07-14 04:44:11.501320] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:51.512 [2024-07-14 04:44:11.501333] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:51.512 [2024-07-14 04:44:11.501678] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:51.512 [2024-07-14 04:44:11.501709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1652cf0 (9): Bad file descriptor 00:27:51.512 [2024-07-14 04:44:11.501730] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1077df0 (9): Bad file descriptor 00:27:51.512 [2024-07-14 04:44:11.501747] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b6cc0 (9): Bad file descriptor 00:27:51.512 [2024-07-14 04:44:11.501763] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:51.512 [2024-07-14 04:44:11.501776] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:51.512 [2024-07-14 04:44:11.501794] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:27:51.512 [2024-07-14 04:44:11.501812] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:51.512 [2024-07-14 04:44:11.501827] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:51.512 [2024-07-14 04:44:11.501840] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:51.512 [2024-07-14 04:44:11.501856] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:51.512 [2024-07-14 04:44:11.501877] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:51.512 [2024-07-14 04:44:11.501891] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:51.512 [2024-07-14 04:44:11.501940] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:51.512 [2024-07-14 04:44:11.501963] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:51.512 [2024-07-14 04:44:11.501977] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:51.512 [2024-07-14 04:44:11.501989] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:51.512 [2024-07-14 04:44:11.502009] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:51.512 [2024-07-14 04:44:11.502024] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:51.512 [2024-07-14 04:44:11.502038] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:51.512 [2024-07-14 04:44:11.502055] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:51.512 [2024-07-14 04:44:11.502068] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:51.512 [2024-07-14 04:44:11.502082] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:51.512 [2024-07-14 04:44:11.502098] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:51.512 [2024-07-14 04:44:11.502111] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:51.512 [2024-07-14 04:44:11.502123] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:51.512 [2024-07-14 04:44:11.502169] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:51.512 [2024-07-14 04:44:11.502187] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:51.512 [2024-07-14 04:44:11.502199] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:51.512 [2024-07-14 04:44:11.502347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.512 [2024-07-14 04:44:11.502373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b0400 with addr=10.0.0.2, port=4420 00:27:51.512 [2024-07-14 04:44:11.502388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0400 is same with the state(5) to be set 00:27:51.513 [2024-07-14 04:44:11.502427] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b0400 (9): Bad file descriptor 00:27:51.513 [2024-07-14 04:44:11.502464] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:51.513 [2024-07-14 04:44:11.502480] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:51.513 [2024-07-14 04:44:11.502493] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:51.513 [2024-07-14 04:44:11.502535] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:51.513 [2024-07-14 04:44:11.503577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.513 [2024-07-14 04:44:11.503602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.513 [2024-07-14 04:44:11.503628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.513 [2024-07-14 04:44:11.503643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.513 [2024-07-14 04:44:11.503660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.513 [2024-07-14 04:44:11.503674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.513 [2024-07-14 04:44:11.503690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.513 [2024-07-14 04:44:11.503704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.513 [2024-07-14 04:44:11.503719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.513 [2024-07-14 04:44:11.503733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.513 [2024-07-14 04:44:11.503748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.513 [2024-07-14 04:44:11.503762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.513 [2024-07-14 04:44:11.503778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.513 [2024-07-14 04:44:11.503793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.513 [2024-07-14 04:44:11.503809 - 04:44:11.505526] nvme_qpair.c: repeated NOTICE pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): READ sqid:1 cid:7-63 nsid:1 lba:25472-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.514 [2024-07-14 04:44:11.505541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1558f30 is same with the state(5) to be set
00:27:51.514 [2024-07-14 04:44:11.506828 - 04:44:11.520041] nvme_qpair.c: repeated NOTICE pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): READ sqid:1 cid:0-51 nsid:1 lba:32768-39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:51.516 [2024-07-14 04:44:11.520057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.516 [2024-07-14 
04:44:11.520071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.516 [2024-07-14 04:44:11.520087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.516 [2024-07-14 04:44:11.520101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.516 [2024-07-14 04:44:11.520116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.516 [2024-07-14 04:44:11.520130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.516 [2024-07-14 04:44:11.520146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.516 [2024-07-14 04:44:11.520160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.516 [2024-07-14 04:44:11.520176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.516 [2024-07-14 04:44:11.520190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.516 [2024-07-14 04:44:11.520205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.516 [2024-07-14 04:44:11.520220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.516 [2024-07-14 04:44:11.520235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.516 [2024-07-14 04:44:11.520250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.516 [2024-07-14 04:44:11.520265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.516 [2024-07-14 04:44:11.520279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.516 [2024-07-14 04:44:11.520295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.516 [2024-07-14 04:44:11.520313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.516 [2024-07-14 04:44:11.520329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.516 [2024-07-14 04:44:11.520343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.516 [2024-07-14 04:44:11.520359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.516 [2024-07-14 04:44:11.520373] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.516 [2024-07-14 04:44:11.520389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.516 [2024-07-14 04:44:11.520403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.516 [2024-07-14 04:44:11.520418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155b930 is same with the state(5) to be set 00:27:51.516 [2024-07-14 04:44:11.522170] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:51.516 task offset: 25344 on job bdev=Nvme1n1 fails 00:27:51.516 00:27:51.516 Latency(us) 00:27:51.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.516 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:51.516 Job: Nvme1n1 ended in about 1.21 seconds with error 00:27:51.516 Verification LBA range: start 0x0 length 0x400 00:27:51.516 Nvme1n1 : 1.21 158.84 9.93 52.95 0.00 299595.47 10777.03 388361.48 00:27:51.516 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:51.516 Job: Nvme2n1 ended in about 1.24 seconds with error 00:27:51.516 Verification LBA range: start 0x0 length 0x400 00:27:51.516 Nvme2n1 : 1.24 204.92 12.81 4.82 0.00 291021.50 29127.11 243891.01 00:27:51.517 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:51.517 Job: Nvme3n1 ended in about 1.25 seconds with error 00:27:51.517 Verification LBA range: start 0x0 length 0x400 00:27:51.517 Nvme3n1 : 1.25 211.54 13.22 51.09 0.00 234501.87 26991.12 234570.33 00:27:51.517 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:51.517 Job: Nvme4n1 ended in about 1.26 seconds with error 00:27:51.517 Verification LBA range: start 0x0 length 0x400 00:27:51.517 Nvme4n1 : 1.26 203.15 12.70 50.79 0.00 238892.37 20874.43 260978.92 00:27:51.517 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:51.517 Job: Nvme5n1 ended in about 1.25 seconds with error 00:27:51.517 Verification LBA range: start 0x0 length 0x400 00:27:51.517 Nvme5n1 : 1.25 149.97 9.37 3.19 0.00 378000.62 40389.59 327777.09 00:27:51.517 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:51.517 Job: Nvme6n1 ended in about 1.27 seconds with error 00:27:51.517 Verification LBA range: start 0x0 length 0x400 00:27:51.517 Nvme6n1 : 1.27 151.41 9.46 50.47 0.00 291469.65 21748.24 265639.25 00:27:51.517 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:51.517 Job: Nvme7n1 ended in about 1.25 seconds with error 00:27:51.517 Verification LBA range: start 0x0 length 0x400 00:27:51.517 Nvme7n1 : 1.25 205.11 12.82 51.28 0.00 225600.85 22330.79 257872.02 00:27:51.517 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:51.517 Job: Nvme8n1 ended in about 1.28 seconds with error 00:27:51.517 Verification LBA range: start 0x0 length 0x400 00:27:51.517 Nvme8n1 : 1.28 199.54 12.47 49.88 0.00 228995.53 20486.07 271853.04 00:27:51.517 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:51.517 Job: Nvme9n1 ended in about 1.25 seconds with error 00:27:51.517 Verification LBA range: start 0x0 length 0x400 00:27:51.517 Nvme9n1 : 1.25 
153.00 9.56 51.00 0.00 274892.80 36700.16 304475.40 00:27:51.517 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:51.517 Job: Nvme10n1 ended in about 1.25 seconds with error 00:27:51.517 Verification LBA range: start 0x0 length 0x400 00:27:51.517 Nvme10n1 : 1.25 207.76 12.99 51.14 0.00 212974.02 19126.80 253211.69 00:27:51.517 =================================================================================================================== 00:27:51.517 Total : 1845.22 115.33 416.61 0.00 260511.31 10777.03 388361.48 00:27:51.517 [2024-07-14 04:44:11.550795] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:51.517 [2024-07-14 04:44:11.550893] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:51.517 [2024-07-14 04:44:11.551771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.517 [2024-07-14 04:44:11.551808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d7de0 with addr=10.0.0.2, port=4420 00:27:51.517 [2024-07-14 04:44:11.551831] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d7de0 is same with the state(5) to be set 00:27:51.517 [2024-07-14 04:44:11.552043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.517 [2024-07-14 04:44:11.552070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1631110 with addr=10.0.0.2, port=4420 00:27:51.517 [2024-07-14 04:44:11.552086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631110 is same with the state(5) to be set 00:27:51.517 [2024-07-14 04:44:11.552120] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:51.517 [2024-07-14 04:44:11.552141] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:51.517 [2024-07-14 04:44:11.552160] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:51.517 [2024-07-14 04:44:11.552178] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:51.517 [2024-07-14 04:44:11.552195] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:51.517 [2024-07-14 04:44:11.552212] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:51.517 [2024-07-14 04:44:11.552229] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:51.517 [2024-07-14 04:44:11.552246] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
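A quick cross-check of the bdevperf summary table above (a hedged editorial aside, not part of the bdevperf output): the Total row is simply the column-wise sum of the ten per-job rows. For the MiB/s column: 9.93 + 12.81 + 13.22 + 12.70 + 9.37 + 9.46 + 12.82 + 12.47 + 9.56 + 12.99 = 115.33, which matches the reported Total of 115.33 MiB/s.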
00:27:51.517 [2024-07-14 04:44:11.552807] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:51.517 [2024-07-14 04:44:11.552836] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:51.517 [2024-07-14 04:44:11.552853] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:51.517 [2024-07-14 04:44:11.552877] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:51.517 [2024-07-14 04:44:11.552894] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:51.517 [2024-07-14 04:44:11.552910] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:51.517 [2024-07-14 04:44:11.552925] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:51.517 [2024-07-14 04:44:11.552994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d7de0 (9): Bad file descriptor 00:27:51.517 [2024-07-14 04:44:11.553022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1631110 (9): Bad file descriptor 00:27:51.517 [2024-07-14 04:44:11.553096] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:51.517 [2024-07-14 04:44:11.553299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.517 [2024-07-14 04:44:11.553326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1568ec0 with addr=10.0.0.2, port=4420 00:27:51.517 [2024-07-14 04:44:11.553342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1568ec0 is same with the state(5) to be set 00:27:51.517 [2024-07-14 04:44:11.553660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.517 [2024-07-14 04:44:11.553684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d5f30 with addr=10.0.0.2, port=4420 00:27:51.517 [2024-07-14 04:44:11.553699] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d5f30 is same with the state(5) to be set 00:27:51.517 [2024-07-14 04:44:11.553847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.517 [2024-07-14 04:44:11.553893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b7a60 with addr=10.0.0.2, port=4420 00:27:51.517 [2024-07-14 04:44:11.553910] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b7a60 is same with the state(5) to be set 00:27:51.517 [2024-07-14 04:44:11.554055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.517 [2024-07-14 04:44:11.554079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d0e60 with addr=10.0.0.2, port=4420 00:27:51.517 [2024-07-14 04:44:11.554094] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d0e60 is same with the state(5) to be set 00:27:51.517 [2024-07-14 04:44:11.554407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.517 [2024-07-14 04:44:11.554431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b6cc0 with addr=10.0.0.2, port=4420 
00:27:51.517 [2024-07-14 04:44:11.554446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b6cc0 is same with the state(5) to be set 00:27:51.517 [2024-07-14 04:44:11.554602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.517 [2024-07-14 04:44:11.554626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1077df0 with addr=10.0.0.2, port=4420 00:27:51.517 [2024-07-14 04:44:11.554640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1077df0 is same with the state(5) to be set 00:27:51.517 [2024-07-14 04:44:11.554786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.517 [2024-07-14 04:44:11.554809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1652cf0 with addr=10.0.0.2, port=4420 00:27:51.517 [2024-07-14 04:44:11.554824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652cf0 is same with the state(5) to be set 00:27:51.517 [2024-07-14 04:44:11.554839] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:51.517 [2024-07-14 04:44:11.554852] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:51.517 [2024-07-14 04:44:11.554877] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:51.517 [2024-07-14 04:44:11.554898] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:51.517 [2024-07-14 04:44:11.554912] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:51.517 [2024-07-14 04:44:11.554926] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:51.517 [2024-07-14 04:44:11.554990] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:51.517 [2024-07-14 04:44:11.555011] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:51.517 [2024-07-14 04:44:11.555168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.517 [2024-07-14 04:44:11.555193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b0400 with addr=10.0.0.2, port=4420 00:27:51.518 [2024-07-14 04:44:11.555208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0400 is same with the state(5) to be set 00:27:51.518 [2024-07-14 04:44:11.555226] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1568ec0 (9): Bad file descriptor 00:27:51.518 [2024-07-14 04:44:11.555245] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d5f30 (9): Bad file descriptor 00:27:51.518 [2024-07-14 04:44:11.555262] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b7a60 (9): Bad file descriptor 00:27:51.518 [2024-07-14 04:44:11.555278] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d0e60 (9): Bad file descriptor 00:27:51.518 [2024-07-14 04:44:11.555294] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b6cc0 (9): Bad file descriptor 00:27:51.518 [2024-07-14 04:44:11.555310] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1077df0 (9): Bad file descriptor 00:27:51.518 [2024-07-14 04:44:11.555327] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1652cf0 (9): Bad file descriptor 00:27:51.518 [2024-07-14 04:44:11.555367] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b0400 (9): Bad file descriptor 00:27:51.518 [2024-07-14 04:44:11.555388] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:51.518 [2024-07-14 04:44:11.555402] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:51.518 [2024-07-14 04:44:11.555414] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:51.518 [2024-07-14 04:44:11.555431] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:51.518 [2024-07-14 04:44:11.555444] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:51.518 [2024-07-14 04:44:11.555456] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:51.518 [2024-07-14 04:44:11.555470] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:51.518 [2024-07-14 04:44:11.555483] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:51.518 [2024-07-14 04:44:11.555495] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:51.518 [2024-07-14 04:44:11.555510] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:51.518 [2024-07-14 04:44:11.555523] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:51.518 [2024-07-14 04:44:11.555535] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:27:51.518 [2024-07-14 04:44:11.555551] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:51.518 [2024-07-14 04:44:11.555564] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:51.518 [2024-07-14 04:44:11.555576] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:51.518 [2024-07-14 04:44:11.555591] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:51.518 [2024-07-14 04:44:11.555604] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:51.518 [2024-07-14 04:44:11.555619] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:51.518 [2024-07-14 04:44:11.555635] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:51.518 [2024-07-14 04:44:11.555653] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:51.518 [2024-07-14 04:44:11.555666] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:51.518 [2024-07-14 04:44:11.555703] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:51.518 [2024-07-14 04:44:11.555722] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:51.518 [2024-07-14 04:44:11.555734] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:51.518 [2024-07-14 04:44:11.555745] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:51.518 [2024-07-14 04:44:11.555757] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:51.518 [2024-07-14 04:44:11.555768] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:51.518 [2024-07-14 04:44:11.555780] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:51.518 [2024-07-14 04:44:11.555791] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:51.518 [2024-07-14 04:44:11.555804] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:51.518 [2024-07-14 04:44:11.555817] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:51.518 [2024-07-14 04:44:11.555884] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
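A note on the failure cascade above (an aside for readers of this log, not test output): every connect() error reports errno = 111, which on Linux is ECONNREFUSED. That is the expected outcome in this shutdown test, since the nvmf target has already been taken down while bdevperf keeps retrying each controller; each refused reconnect then surfaces as a "Bad file descriptor" qpair flush, a failed controller reinitialization, and finally "Resetting controller failed." The errno mapping can be confirmed on the build host with a one-liner (assumes python3 is present):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused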
00:27:52.084 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:52.084 04:44:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2877443 00:27:53.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2877443) - No such process 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:53.020 rmmod nvme_tcp 00:27:53.020 rmmod nvme_fabrics 00:27:53.020 rmmod nvme_keyring 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:53.020 04:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.916 04:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:54.916 00:27:54.916 real 0m7.906s 00:27:54.916 user 0m19.846s 00:27:54.916 sys 0m1.700s 00:27:54.916 
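For anyone reproducing the teardown by hand: the nvmftestfini/nvmfcleanup sequence traced above amounts to unloading the NVMe host modules and removing the SPDK-created network namespace. A rough manual equivalent is sketched below, using the module names printed by rmmod above and the namespace/interface names established elsewhere in this log (cvl_0_0_ns_spdk, cvl_0_1); it assumes no other consumer still holds the modules and that _remove_spdk_ns does nothing beyond deleting that namespace:

  sync
  modprobe -r nvme-tcp nvme-fabrics nvme-keyring
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1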
04:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:54.916 04:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:54.916 ************************************ 00:27:54.916 END TEST nvmf_shutdown_tc3 00:27:54.916 ************************************ 00:27:55.173 04:44:15 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:55.173 00:27:55.173 real 0m27.892s 00:27:55.173 user 1m18.452s 00:27:55.173 sys 0m6.781s 00:27:55.173 04:44:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:55.173 04:44:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:55.173 ************************************ 00:27:55.173 END TEST nvmf_shutdown 00:27:55.173 ************************************ 00:27:55.173 04:44:15 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:27:55.173 04:44:15 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:55.173 04:44:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:55.173 04:44:15 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:27:55.173 04:44:15 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:55.173 04:44:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:55.173 04:44:15 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:27:55.173 04:44:15 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:55.173 04:44:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:55.173 04:44:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:55.173 04:44:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:55.173 ************************************ 00:27:55.173 START TEST nvmf_multicontroller 00:27:55.173 ************************************ 00:27:55.173 04:44:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:55.173 * Looking for test storage... 
00:27:55.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:55.173 04:44:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.173 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:55.173 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.173 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:55.174 04:44:15 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:55.174 04:44:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:57.091 04:44:17 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:57.091 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:57.091 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:57.091 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:57.091 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:57.091 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:57.350 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:57.350 04:44:17 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:57.350 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:57.350 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:57.350 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:57.350 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:57.350 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:57.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:57.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:27:57.350 00:27:57.350 --- 10.0.0.2 ping statistics --- 00:27:57.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.350 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:27:57.350 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:57.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:57.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:27:57.350 00:27:57.350 --- 10.0.0.1 ping statistics --- 00:27:57.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.350 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:27:57.350 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:57.350 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:27:57.350 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:57.350 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:57.350 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:57.350 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:57.350 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:57.350 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:57.350 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:57.350 04:44:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:57.350 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:57.351 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:57.351 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.351 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2879966 00:27:57.351 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:57.351 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2879966 00:27:57.351 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 2879966 ']' 00:27:57.351 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.351 04:44:17 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:27:57.351 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.351 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:57.351 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.351 [2024-07-14 04:44:17.440802] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:57.351 [2024-07-14 04:44:17.440903] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:57.351 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.351 [2024-07-14 04:44:17.506243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:57.609 [2024-07-14 04:44:17.596580] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:57.609 [2024-07-14 04:44:17.596639] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:57.609 [2024-07-14 04:44:17.596668] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:57.609 [2024-07-14 04:44:17.596680] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:57.609 [2024-07-14 04:44:17.596690] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:57.609 [2024-07-14 04:44:17.596785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:57.609 [2024-07-14 04:44:17.596851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:57.609 [2024-07-14 04:44:17.596853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.609 [2024-07-14 04:44:17.738222] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.609 04:44:17 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.609 Malloc0 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.609 [2024-07-14 04:44:17.795396] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.609 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.868 [2024-07-14 04:44:17.803269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.868 Malloc1 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 
00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2880002 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2880002 /var/tmp/bdevperf.sock 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 2880002 ']' 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:57.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
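At this point the multicontroller fixture is fully provisioned: one TCP transport, two malloc bdevs, two subsystems (cnode1 and cnode2) each exposing a namespace on listeners 4420 and 4421, and a bdevperf instance waiting on its own RPC socket. Condensed from the rpc_cmd calls traced above, the target-side setup is roughly the following (a sketch only; rpc_cmd is assumed here to be the test suite's thin wrapper around scripts/rpc.py, and cnode2 is built the same way from Malloc1 with serial SPDK00000000000002):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# bdevperf is then started detached, listening on its own socket and waiting for an RPC trigger:
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f

The checks that follow in the trace attach NVMe0 over the bdevperf socket and then deliberately re-issue bdev_nvme_attach_controller with a mismatched host NQN, subsystem, or multipath mode, expecting each attempt to be rejected with -114 before the second path on 4421 is added.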
00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:57.868 04:44:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.127 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:58.127 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:27:58.127 04:44:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:58.127 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.127 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.127 NVMe0n1 00:27:58.127 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.127 04:44:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:58.127 04:44:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:58.127 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.127 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.127 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.127 1 00:27:58.127 04:44:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:58.127 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:58.127 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:58.127 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:58.127 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.127 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:58.127 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.127 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:58.127 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.127 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.385 request: 00:27:58.385 { 00:27:58.385 "name": "NVMe0", 00:27:58.385 "trtype": "tcp", 00:27:58.385 "traddr": "10.0.0.2", 00:27:58.385 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:58.385 "hostaddr": "10.0.0.2", 00:27:58.385 "hostsvcid": "60000", 00:27:58.385 "adrfam": "ipv4", 00:27:58.385 "trsvcid": "4420", 00:27:58.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:58.385 "method": 
"bdev_nvme_attach_controller", 00:27:58.385 "req_id": 1 00:27:58.385 } 00:27:58.385 Got JSON-RPC error response 00:27:58.385 response: 00:27:58.386 { 00:27:58.386 "code": -114, 00:27:58.386 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:58.386 } 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.386 request: 00:27:58.386 { 00:27:58.386 "name": "NVMe0", 00:27:58.386 "trtype": "tcp", 00:27:58.386 "traddr": "10.0.0.2", 00:27:58.386 "hostaddr": "10.0.0.2", 00:27:58.386 "hostsvcid": "60000", 00:27:58.386 "adrfam": "ipv4", 00:27:58.386 "trsvcid": "4420", 00:27:58.386 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:58.386 "method": "bdev_nvme_attach_controller", 00:27:58.386 "req_id": 1 00:27:58.386 } 00:27:58.386 Got JSON-RPC error response 00:27:58.386 response: 00:27:58.386 { 00:27:58.386 "code": -114, 00:27:58.386 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:58.386 } 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.386 request: 00:27:58.386 { 00:27:58.386 "name": "NVMe0", 00:27:58.386 "trtype": "tcp", 00:27:58.386 "traddr": "10.0.0.2", 00:27:58.386 "hostaddr": "10.0.0.2", 00:27:58.386 "hostsvcid": "60000", 00:27:58.386 "adrfam": "ipv4", 00:27:58.386 "trsvcid": "4420", 00:27:58.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:58.386 "multipath": "disable", 00:27:58.386 "method": "bdev_nvme_attach_controller", 00:27:58.386 "req_id": 1 00:27:58.386 } 00:27:58.386 Got JSON-RPC error response 00:27:58.386 response: 00:27:58.386 { 00:27:58.386 "code": -114, 00:27:58.386 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:58.386 } 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.386 request: 00:27:58.386 { 00:27:58.386 "name": "NVMe0", 00:27:58.386 "trtype": "tcp", 00:27:58.386 "traddr": "10.0.0.2", 00:27:58.386 "hostaddr": "10.0.0.2", 00:27:58.386 "hostsvcid": "60000", 00:27:58.386 "adrfam": "ipv4", 00:27:58.386 "trsvcid": "4420", 00:27:58.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:58.386 "multipath": "failover", 00:27:58.386 "method": "bdev_nvme_attach_controller", 00:27:58.386 "req_id": 1 00:27:58.386 } 00:27:58.386 Got JSON-RPC error response 00:27:58.386 response: 00:27:58.386 { 00:27:58.386 "code": -114, 00:27:58.386 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:58.386 } 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.386 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.386 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:58.386 04:44:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:59.760 0 00:27:59.760 04:44:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:59.760 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.760 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.760 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.760 04:44:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2880002 00:27:59.760 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 2880002 ']' 00:27:59.760 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 2880002 00:27:59.760 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:27:59.760 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:59.760 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2880002 00:27:59.760 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:59.760 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:59.760 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2880002' 00:27:59.760 killing process with pid 2880002 00:27:59.760 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 2880002 00:27:59.760 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 2880002 00:28:00.018 04:44:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:00.018 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.018 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:00.018 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.019 04:44:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:00.019 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.019 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:00.019 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.019 04:44:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:00.019 04:44:19 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:00.019 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:00.019 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:00.019 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:28:00.019 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:28:00.019 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:00.019 [2024-07-14 04:44:17.908134] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:00.019 [2024-07-14 04:44:17.908227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880002 ] 00:28:00.019 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.019 [2024-07-14 04:44:17.967783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.019 [2024-07-14 04:44:18.057876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.019 [2024-07-14 04:44:18.554254] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 9aef51c0-4d22-4c9a-99dc-3bbdad3cce7e already exists 00:28:00.019 [2024-07-14 04:44:18.554301] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:9aef51c0-4d22-4c9a-99dc-3bbdad3cce7e alias for bdev NVMe1n1 00:28:00.019 [2024-07-14 04:44:18.554337] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:00.019 Running I/O for 1 seconds... 
00:28:00.019 00:28:00.019 Latency(us) 00:28:00.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:00.019 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:00.019 NVMe0n1 : 1.01 18898.58 73.82 0.00 0.00 6755.07 4199.16 14563.56 00:28:00.019 =================================================================================================================== 00:28:00.019 Total : 18898.58 73.82 0.00 0.00 6755.07 4199.16 14563.56 00:28:00.019 Received shutdown signal, test time was about 1.000000 seconds 00:28:00.019 00:28:00.019 Latency(us) 00:28:00.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:00.019 =================================================================================================================== 00:28:00.019 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:00.019 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:00.019 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:00.019 04:44:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:00.019 04:44:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:00.019 04:44:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:00.019 04:44:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:00.019 04:44:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:00.019 04:44:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:00.019 04:44:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:00.019 04:44:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:00.019 rmmod nvme_tcp 00:28:00.019 rmmod nvme_fabrics 00:28:00.019 rmmod nvme_keyring 00:28:00.019 04:44:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:00.019 04:44:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:00.019 04:44:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:00.019 04:44:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2879966 ']' 00:28:00.019 04:44:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2879966 00:28:00.019 04:44:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 2879966 ']' 00:28:00.019 04:44:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 2879966 00:28:00.019 04:44:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:00.019 04:44:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:00.019 04:44:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2879966 00:28:00.019 04:44:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:00.019 04:44:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:00.019 04:44:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2879966' 00:28:00.019 killing process with pid 2879966 00:28:00.019 04:44:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 2879966 00:28:00.019 04:44:20 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 2879966 00:28:00.277 04:44:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:00.277 04:44:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:00.277 04:44:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:00.277 04:44:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:00.277 04:44:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:00.277 04:44:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.277 04:44:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:00.277 04:44:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.201 04:44:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:02.201 00:28:02.201 real 0m7.175s 00:28:02.201 user 0m10.914s 00:28:02.201 sys 0m2.312s 00:28:02.201 04:44:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:02.201 04:44:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:02.201 ************************************ 00:28:02.201 END TEST nvmf_multicontroller 00:28:02.201 ************************************ 00:28:02.460 04:44:22 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:02.460 04:44:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:02.460 04:44:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:02.460 04:44:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:02.460 ************************************ 00:28:02.460 START TEST nvmf_aer 00:28:02.460 ************************************ 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:02.460 * Looking for test storage... 
00:28:02.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:02.460 04:44:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:04.385 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:28:04.385 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:04.385 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:04.385 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:04.385 
04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:04.385 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:04.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:04.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:28:04.386 00:28:04.386 --- 10.0.0.2 ping statistics --- 00:28:04.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.386 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:04.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:04.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:28:04.386 00:28:04.386 --- 10.0.0.1 ping statistics --- 00:28:04.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.386 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2882196 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2882196 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 2882196 ']' 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:04.386 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.645 [2024-07-14 04:44:24.597933] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:04.645 [2024-07-14 04:44:24.598021] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.645 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.645 [2024-07-14 04:44:24.662925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:04.645 [2024-07-14 04:44:24.749759] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:04.645 [2024-07-14 04:44:24.749809] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
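The phy-mode plumbing for the aer host test is now in place: the ice port cvl_0_0 has been moved into a private network namespace as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt has been launched inside that namespace with core mask 0xF. Pulled out of the nvmf/common.sh trace above, the network setup amounts to roughly this (a condensed sketch of the commands the helper logged, nothing beyond them):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator side -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator side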
00:28:04.645 [2024-07-14 04:44:24.749841] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:04.645 [2024-07-14 04:44:24.749853] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:04.645 [2024-07-14 04:44:24.749863] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:04.645 [2024-07-14 04:44:24.749941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.645 [2024-07-14 04:44:24.750006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:04.645 [2024-07-14 04:44:24.750054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:04.645 [2024-07-14 04:44:24.750057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.904 [2024-07-14 04:44:24.901702] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.904 Malloc0 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.904 [2024-07-14 04:44:24.954057] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.904 [ 00:28:04.904 { 00:28:04.904 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:04.904 "subtype": "Discovery", 00:28:04.904 "listen_addresses": [], 00:28:04.904 "allow_any_host": true, 00:28:04.904 "hosts": [] 00:28:04.904 }, 00:28:04.904 { 00:28:04.904 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:04.904 "subtype": "NVMe", 00:28:04.904 "listen_addresses": [ 00:28:04.904 { 00:28:04.904 "trtype": "TCP", 00:28:04.904 "adrfam": "IPv4", 00:28:04.904 "traddr": "10.0.0.2", 00:28:04.904 "trsvcid": "4420" 00:28:04.904 } 00:28:04.904 ], 00:28:04.904 "allow_any_host": true, 00:28:04.904 "hosts": [], 00:28:04.904 "serial_number": "SPDK00000000000001", 00:28:04.904 "model_number": "SPDK bdev Controller", 00:28:04.904 "max_namespaces": 2, 00:28:04.904 "min_cntlid": 1, 00:28:04.904 "max_cntlid": 65519, 00:28:04.904 "namespaces": [ 00:28:04.904 { 00:28:04.904 "nsid": 1, 00:28:04.904 "bdev_name": "Malloc0", 00:28:04.904 "name": "Malloc0", 00:28:04.904 "nguid": "81A1BB26602A4040A0B2D7A5286323BE", 00:28:04.904 "uuid": "81a1bb26-602a-4040-a0b2-d7a5286323be" 00:28:04.904 } 00:28:04.904 ] 00:28:04.904 } 00:28:04.904 ] 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2882320 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:28:04.904 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:04.905 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:28:04.905 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:28:04.905 04:44:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:04.905 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.905 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:04.905 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:28:04.905 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:28:04.905 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:05.163 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:05.163 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:28:05.163 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=3 00:28:05.163 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:05.163 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:05.163 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:05.163 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:28:05.163 04:44:25 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:05.163 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.163 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.163 Malloc1 00:28:05.163 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.163 04:44:25 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:05.163 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.163 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.163 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.163 04:44:25 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:05.163 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.163 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.422 [ 00:28:05.422 { 00:28:05.422 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:05.422 "subtype": "Discovery", 00:28:05.422 "listen_addresses": [], 00:28:05.422 "allow_any_host": true, 00:28:05.422 "hosts": [] 00:28:05.422 }, 00:28:05.422 { 00:28:05.422 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:05.422 "subtype": "NVMe", 00:28:05.422 "listen_addresses": [ 00:28:05.422 { 00:28:05.422 "trtype": "TCP", 00:28:05.422 "adrfam": "IPv4", 00:28:05.422 "traddr": "10.0.0.2", 00:28:05.422 "trsvcid": "4420" 00:28:05.422 } 00:28:05.422 ], 00:28:05.422 "allow_any_host": true, 00:28:05.422 "hosts": [], 00:28:05.422 "serial_number": "SPDK00000000000001", 00:28:05.422 "model_number": "SPDK bdev Controller", 00:28:05.422 "max_namespaces": 2, 00:28:05.422 "min_cntlid": 1, 00:28:05.422 "max_cntlid": 65519, 00:28:05.422 "namespaces": [ 00:28:05.422 { 00:28:05.422 "nsid": 1, 00:28:05.422 "bdev_name": "Malloc0", 00:28:05.422 "name": "Malloc0", 00:28:05.422 "nguid": "81A1BB26602A4040A0B2D7A5286323BE", 00:28:05.422 "uuid": "81a1bb26-602a-4040-a0b2-d7a5286323be" 00:28:05.422 }, 00:28:05.422 { 00:28:05.422 "nsid": 2, 00:28:05.422 "bdev_name": "Malloc1", 00:28:05.422 "name": "Malloc1", 00:28:05.422 "nguid": "2A51FD01F0FB452E95CC97128E8B1032", 00:28:05.422 "uuid": "2a51fd01-f0fb-452e-95cc-97128e8b1032" 00:28:05.422 } 00:28:05.422 ] 00:28:05.422 } 00:28:05.422 ] 00:28:05.422 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.422 04:44:25 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2882320 00:28:05.422 Asynchronous Event Request test 00:28:05.422 Attaching to 10.0.0.2 00:28:05.422 Attached to 10.0.0.2 00:28:05.422 Registering asynchronous event callbacks... 00:28:05.422 Starting namespace attribute notice tests for all controllers... 
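This is the core of the AER check: the aer example binary is already connected to cnode1 and has armed its asynchronous-event callbacks, and the target then hot-adds a second namespace, which should produce the Changed Namespace notice reported in the aer_cb output that follows. Reduced to the essential commands from the trace (a sketch; paths are relative to the spdk checkout and the rpc.py form of rpc_cmd is assumed):

# host side, started earlier and left waiting for namespace attribute notices
test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
# target side, while the host is connected: create a second bdev and attach it as nsid 2
scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2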
00:28:05.422 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:05.422 aer_cb - Changed Namespace 00:28:05.422 Cleaning up... 00:28:05.422 04:44:25 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:05.422 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.422 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.422 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.422 04:44:25 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:05.422 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.422 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.422 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.422 04:44:25 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:05.422 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.422 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.422 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.422 04:44:25 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:05.422 04:44:25 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:05.423 04:44:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:05.423 04:44:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:05.423 04:44:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:05.423 04:44:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:05.423 04:44:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:05.423 04:44:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:05.423 rmmod nvme_tcp 00:28:05.423 rmmod nvme_fabrics 00:28:05.423 rmmod nvme_keyring 00:28:05.423 04:44:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:05.423 04:44:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:05.423 04:44:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:05.423 04:44:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2882196 ']' 00:28:05.423 04:44:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2882196 00:28:05.423 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 2882196 ']' 00:28:05.423 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 2882196 00:28:05.423 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:28:05.423 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:05.423 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2882196 00:28:05.423 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:05.423 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:05.423 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2882196' 00:28:05.423 killing process with pid 2882196 00:28:05.423 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 2882196 00:28:05.423 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 2882196 00:28:05.687 04:44:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == 
iso ']' 00:28:05.687 04:44:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:05.687 04:44:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:05.687 04:44:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:05.687 04:44:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:05.687 04:44:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.687 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.687 04:44:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.216 04:44:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:08.216 00:28:08.216 real 0m5.383s 00:28:08.216 user 0m4.601s 00:28:08.216 sys 0m1.918s 00:28:08.216 04:44:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:08.216 04:44:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.216 ************************************ 00:28:08.216 END TEST nvmf_aer 00:28:08.216 ************************************ 00:28:08.216 04:44:27 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:08.216 04:44:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:08.216 04:44:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:08.216 04:44:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:08.216 ************************************ 00:28:08.216 START TEST nvmf_async_init 00:28:08.216 ************************************ 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:08.216 * Looking for test storage... 
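The nvmf_async_init run that follows attaches a bdev_nvme controller to the target, resets it, and then reattaches over a TLS secure-channel listener with a pre-shared key. A sketch of the core RPC flow, condensed from the rpc_cmd calls recorded further down; /tmp/psk.key stands in for the mktemp path the test actually uses, and the nguid is generated the same way the script does:

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_null_create null0 1024 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$(uuidgen | tr -d -)"
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0
    # TLS re-attach: write the PSK, restrict hosts, add a secure-channel listener on 4421,
    # then connect again with the key (both sides of the deprecation warnings seen below).
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > /tmp/psk.key
    chmod 0600 /tmp/psk.key
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/psk.key
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/psk.key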
00:28:08.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.216 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:08.217 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:08.217 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:08.217 04:44:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:08.217 04:44:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:08.217 04:44:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:08.217 04:44:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:08.217 04:44:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:08.217 04:44:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:08.217 04:44:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=75e5fa04df4f49e6aa0695e1e96f94e5 00:28:08.217 04:44:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:08.217 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:08.217 04:44:27 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.217 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:08.217 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:08.217 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:08.217 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.217 04:44:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:08.217 04:44:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.217 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:08.217 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:08.217 04:44:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:08.217 04:44:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.590 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:09.590 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:09.590 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:09.590 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:09.590 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:09.590 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:09.591 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:09.591 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:09.591 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
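At this point the harness has found the two Intel E810 ports (8086:159b at 0000:0a:00.0 and 0000:0a:00.1) and resolved them to kernel net devices through sysfs. A rough standalone equivalent of that lookup, assuming the same PCI addresses as this host:

    # nvmf/common.sh does this via pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"
    done
    # -> cvl_0_0 and cvl_0_1 on this machine, as echoed below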
00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:09.591 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:09.591 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:09.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:09.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:28:09.849 00:28:09.849 --- 10.0.0.2 ping statistics --- 00:28:09.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.849 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:09.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:09.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:28:09.849 00:28:09.849 --- 10.0.0.1 ping statistics --- 00:28:09.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.849 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2884283 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:09.849 04:44:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2884283 00:28:09.850 04:44:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 2884283 ']' 00:28:09.850 04:44:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.850 04:44:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:09.850 04:44:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.850 04:44:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:09.850 04:44:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.850 [2024-07-14 04:44:29.929967] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
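The networking just configured puts the target side in its own namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and given 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24, traffic to port 4420 is allowed through, and nvmf_tgt is then launched inside the namespace. Condensed from the commands traced above (interface names, IPs, and the core mask 0x1 are those of this run; the nvmf_tgt path is relative to the SPDK build):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1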
00:28:09.850 [2024-07-14 04:44:29.930044] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.850 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.850 [2024-07-14 04:44:29.992383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.108 [2024-07-14 04:44:30.083692] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.108 [2024-07-14 04:44:30.083747] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.108 [2024-07-14 04:44:30.083775] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.108 [2024-07-14 04:44:30.083787] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.108 [2024-07-14 04:44:30.083797] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:10.108 [2024-07-14 04:44:30.083823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.108 [2024-07-14 04:44:30.230826] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.108 null0 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 75e5fa04df4f49e6aa0695e1e96f94e5 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.108 [2024-07-14 04:44:30.271095] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.108 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.366 nvme0n1 00:28:10.366 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.366 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:10.366 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.366 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.366 [ 00:28:10.366 { 00:28:10.366 "name": "nvme0n1", 00:28:10.366 "aliases": [ 00:28:10.366 "75e5fa04-df4f-49e6-aa06-95e1e96f94e5" 00:28:10.366 ], 00:28:10.366 "product_name": "NVMe disk", 00:28:10.366 "block_size": 512, 00:28:10.366 "num_blocks": 2097152, 00:28:10.366 "uuid": "75e5fa04-df4f-49e6-aa06-95e1e96f94e5", 00:28:10.366 "assigned_rate_limits": { 00:28:10.366 "rw_ios_per_sec": 0, 00:28:10.366 "rw_mbytes_per_sec": 0, 00:28:10.366 "r_mbytes_per_sec": 0, 00:28:10.366 "w_mbytes_per_sec": 0 00:28:10.366 }, 00:28:10.366 "claimed": false, 00:28:10.366 "zoned": false, 00:28:10.366 "supported_io_types": { 00:28:10.366 "read": true, 00:28:10.366 "write": true, 00:28:10.366 "unmap": false, 00:28:10.366 "write_zeroes": true, 00:28:10.366 "flush": true, 00:28:10.366 "reset": true, 00:28:10.366 "compare": true, 00:28:10.366 "compare_and_write": true, 00:28:10.366 "abort": true, 00:28:10.366 "nvme_admin": true, 00:28:10.366 "nvme_io": true 00:28:10.366 }, 00:28:10.366 "memory_domains": [ 00:28:10.366 { 00:28:10.366 "dma_device_id": "system", 00:28:10.366 "dma_device_type": 1 00:28:10.366 } 00:28:10.366 ], 00:28:10.366 "driver_specific": { 00:28:10.366 "nvme": [ 00:28:10.366 { 00:28:10.366 "trid": { 00:28:10.366 "trtype": "TCP", 00:28:10.366 "adrfam": "IPv4", 00:28:10.366 "traddr": "10.0.0.2", 00:28:10.366 "trsvcid": "4420", 00:28:10.366 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:10.366 }, 00:28:10.366 "ctrlr_data": { 00:28:10.366 "cntlid": 1, 00:28:10.366 "vendor_id": "0x8086", 00:28:10.366 "model_number": "SPDK bdev Controller", 00:28:10.366 "serial_number": "00000000000000000000", 00:28:10.366 "firmware_revision": 
"24.05.1", 00:28:10.366 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:10.366 "oacs": { 00:28:10.366 "security": 0, 00:28:10.366 "format": 0, 00:28:10.366 "firmware": 0, 00:28:10.366 "ns_manage": 0 00:28:10.366 }, 00:28:10.366 "multi_ctrlr": true, 00:28:10.366 "ana_reporting": false 00:28:10.366 }, 00:28:10.366 "vs": { 00:28:10.366 "nvme_version": "1.3" 00:28:10.366 }, 00:28:10.366 "ns_data": { 00:28:10.366 "id": 1, 00:28:10.366 "can_share": true 00:28:10.366 } 00:28:10.366 } 00:28:10.366 ], 00:28:10.366 "mp_policy": "active_passive" 00:28:10.366 } 00:28:10.366 } 00:28:10.366 ] 00:28:10.366 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.366 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:10.366 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.366 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.366 [2024-07-14 04:44:30.519569] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:10.366 [2024-07-14 04:44:30.519671] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a8b90 (9): Bad file descriptor 00:28:10.624 [2024-07-14 04:44:30.653999] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:10.624 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.624 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:10.624 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.624 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.624 [ 00:28:10.624 { 00:28:10.624 "name": "nvme0n1", 00:28:10.624 "aliases": [ 00:28:10.624 "75e5fa04-df4f-49e6-aa06-95e1e96f94e5" 00:28:10.624 ], 00:28:10.624 "product_name": "NVMe disk", 00:28:10.624 "block_size": 512, 00:28:10.624 "num_blocks": 2097152, 00:28:10.624 "uuid": "75e5fa04-df4f-49e6-aa06-95e1e96f94e5", 00:28:10.624 "assigned_rate_limits": { 00:28:10.624 "rw_ios_per_sec": 0, 00:28:10.624 "rw_mbytes_per_sec": 0, 00:28:10.624 "r_mbytes_per_sec": 0, 00:28:10.624 "w_mbytes_per_sec": 0 00:28:10.624 }, 00:28:10.624 "claimed": false, 00:28:10.624 "zoned": false, 00:28:10.624 "supported_io_types": { 00:28:10.624 "read": true, 00:28:10.624 "write": true, 00:28:10.624 "unmap": false, 00:28:10.624 "write_zeroes": true, 00:28:10.624 "flush": true, 00:28:10.624 "reset": true, 00:28:10.624 "compare": true, 00:28:10.624 "compare_and_write": true, 00:28:10.624 "abort": true, 00:28:10.624 "nvme_admin": true, 00:28:10.624 "nvme_io": true 00:28:10.624 }, 00:28:10.624 "memory_domains": [ 00:28:10.624 { 00:28:10.624 "dma_device_id": "system", 00:28:10.624 "dma_device_type": 1 00:28:10.624 } 00:28:10.624 ], 00:28:10.624 "driver_specific": { 00:28:10.624 "nvme": [ 00:28:10.624 { 00:28:10.624 "trid": { 00:28:10.624 "trtype": "TCP", 00:28:10.624 "adrfam": "IPv4", 00:28:10.624 "traddr": "10.0.0.2", 00:28:10.624 "trsvcid": "4420", 00:28:10.624 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:10.624 }, 00:28:10.624 "ctrlr_data": { 00:28:10.624 "cntlid": 2, 00:28:10.624 "vendor_id": "0x8086", 00:28:10.624 "model_number": "SPDK bdev Controller", 00:28:10.624 "serial_number": "00000000000000000000", 00:28:10.624 "firmware_revision": "24.05.1", 00:28:10.624 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:10.624 
"oacs": { 00:28:10.624 "security": 0, 00:28:10.624 "format": 0, 00:28:10.624 "firmware": 0, 00:28:10.624 "ns_manage": 0 00:28:10.624 }, 00:28:10.624 "multi_ctrlr": true, 00:28:10.624 "ana_reporting": false 00:28:10.624 }, 00:28:10.624 "vs": { 00:28:10.624 "nvme_version": "1.3" 00:28:10.624 }, 00:28:10.624 "ns_data": { 00:28:10.624 "id": 1, 00:28:10.624 "can_share": true 00:28:10.624 } 00:28:10.624 } 00:28:10.624 ], 00:28:10.624 "mp_policy": "active_passive" 00:28:10.624 } 00:28:10.624 } 00:28:10.624 ] 00:28:10.624 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.624 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.624 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.624 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Il5H7C9VdZ 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Il5H7C9VdZ 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.625 [2024-07-14 04:44:30.704206] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:10.625 [2024-07-14 04:44:30.704373] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Il5H7C9VdZ 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.625 [2024-07-14 04:44:30.712219] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Il5H7C9VdZ 00:28:10.625 04:44:30 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.625 [2024-07-14 04:44:30.720231] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:10.625 [2024-07-14 04:44:30.720304] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:10.625 nvme0n1 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.625 [ 00:28:10.625 { 00:28:10.625 "name": "nvme0n1", 00:28:10.625 "aliases": [ 00:28:10.625 "75e5fa04-df4f-49e6-aa06-95e1e96f94e5" 00:28:10.625 ], 00:28:10.625 "product_name": "NVMe disk", 00:28:10.625 "block_size": 512, 00:28:10.625 "num_blocks": 2097152, 00:28:10.625 "uuid": "75e5fa04-df4f-49e6-aa06-95e1e96f94e5", 00:28:10.625 "assigned_rate_limits": { 00:28:10.625 "rw_ios_per_sec": 0, 00:28:10.625 "rw_mbytes_per_sec": 0, 00:28:10.625 "r_mbytes_per_sec": 0, 00:28:10.625 "w_mbytes_per_sec": 0 00:28:10.625 }, 00:28:10.625 "claimed": false, 00:28:10.625 "zoned": false, 00:28:10.625 "supported_io_types": { 00:28:10.625 "read": true, 00:28:10.625 "write": true, 00:28:10.625 "unmap": false, 00:28:10.625 "write_zeroes": true, 00:28:10.625 "flush": true, 00:28:10.625 "reset": true, 00:28:10.625 "compare": true, 00:28:10.625 "compare_and_write": true, 00:28:10.625 "abort": true, 00:28:10.625 "nvme_admin": true, 00:28:10.625 "nvme_io": true 00:28:10.625 }, 00:28:10.625 "memory_domains": [ 00:28:10.625 { 00:28:10.625 "dma_device_id": "system", 00:28:10.625 "dma_device_type": 1 00:28:10.625 } 00:28:10.625 ], 00:28:10.625 "driver_specific": { 00:28:10.625 "nvme": [ 00:28:10.625 { 00:28:10.625 "trid": { 00:28:10.625 "trtype": "TCP", 00:28:10.625 "adrfam": "IPv4", 00:28:10.625 "traddr": "10.0.0.2", 00:28:10.625 "trsvcid": "4421", 00:28:10.625 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:10.625 }, 00:28:10.625 "ctrlr_data": { 00:28:10.625 "cntlid": 3, 00:28:10.625 "vendor_id": "0x8086", 00:28:10.625 "model_number": "SPDK bdev Controller", 00:28:10.625 "serial_number": "00000000000000000000", 00:28:10.625 "firmware_revision": "24.05.1", 00:28:10.625 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:10.625 "oacs": { 00:28:10.625 "security": 0, 00:28:10.625 "format": 0, 00:28:10.625 "firmware": 0, 00:28:10.625 "ns_manage": 0 00:28:10.625 }, 00:28:10.625 "multi_ctrlr": true, 00:28:10.625 "ana_reporting": false 00:28:10.625 }, 00:28:10.625 "vs": { 00:28:10.625 "nvme_version": "1.3" 00:28:10.625 }, 00:28:10.625 "ns_data": { 00:28:10.625 "id": 1, 00:28:10.625 "can_share": true 00:28:10.625 } 00:28:10.625 } 00:28:10.625 ], 00:28:10.625 "mp_policy": "active_passive" 00:28:10.625 } 00:28:10.625 } 00:28:10.625 ] 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.625 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- 
# set +x 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.Il5H7C9VdZ 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:10.884 rmmod nvme_tcp 00:28:10.884 rmmod nvme_fabrics 00:28:10.884 rmmod nvme_keyring 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2884283 ']' 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2884283 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 2884283 ']' 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 2884283 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2884283 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2884283' 00:28:10.884 killing process with pid 2884283 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 2884283 00:28:10.884 [2024-07-14 04:44:30.918404] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:10.884 [2024-07-14 04:44:30.918445] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:10.884 04:44:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 2884283 00:28:11.144 04:44:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:11.144 04:44:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:11.144 04:44:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:11.144 04:44:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:11.144 04:44:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:11.144 04:44:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.144 
04:44:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:11.144 04:44:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.080 04:44:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:13.080 00:28:13.080 real 0m5.314s 00:28:13.080 user 0m2.004s 00:28:13.080 sys 0m1.687s 00:28:13.080 04:44:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:13.080 04:44:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:13.080 ************************************ 00:28:13.080 END TEST nvmf_async_init 00:28:13.080 ************************************ 00:28:13.080 04:44:33 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:13.080 04:44:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:13.080 04:44:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:13.080 04:44:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:13.080 ************************************ 00:28:13.080 START TEST dma 00:28:13.080 ************************************ 00:28:13.080 04:44:33 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:13.350 * Looking for test storage... 00:28:13.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:13.350 04:44:33 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.350 04:44:33 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.350 04:44:33 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.350 04:44:33 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.350 04:44:33 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.350 04:44:33 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.350 04:44:33 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.350 04:44:33 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:13.350 04:44:33 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:13.350 04:44:33 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:13.350 04:44:33 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:13.350 04:44:33 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:13.350 00:28:13.350 real 0m0.070s 00:28:13.350 user 0m0.028s 00:28:13.350 sys 0m0.047s 00:28:13.350 
04:44:33 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:13.350 04:44:33 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:13.350 ************************************ 00:28:13.350 END TEST dma 00:28:13.350 ************************************ 00:28:13.350 04:44:33 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:13.350 04:44:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:13.350 04:44:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:13.350 04:44:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:13.350 ************************************ 00:28:13.350 START TEST nvmf_identify 00:28:13.350 ************************************ 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:13.350 * Looking for test storage... 00:28:13.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.350 04:44:33 
nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 
00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:13.350 04:44:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:15.253 04:44:35 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:15.253 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:15.253 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:15.253 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.253 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:15.254 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:15.254 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:15.512 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:15.512 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:15.512 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:15.512 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:15.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:28:15.512 00:28:15.512 --- 10.0.0.2 ping statistics --- 00:28:15.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.512 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:28:15.512 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:15.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:15.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:28:15.512 00:28:15.512 --- 10.0.0.1 ping statistics --- 00:28:15.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.512 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2886400 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2886400 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 2886400 ']' 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:15.513 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.513 [2024-07-14 04:44:35.573601] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:28:15.513 [2024-07-14 04:44:35.573683] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.513 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.513 [2024-07-14 04:44:35.641769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:15.771 [2024-07-14 04:44:35.734706] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:15.771 [2024-07-14 04:44:35.734764] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:15.771 [2024-07-14 04:44:35.734785] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:15.771 [2024-07-14 04:44:35.734799] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:15.771 [2024-07-14 04:44:35.734810] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:15.771 [2024-07-14 04:44:35.734904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.771 [2024-07-14 04:44:35.734949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:15.771 [2024-07-14 04:44:35.735031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:15.771 [2024-07-14 04:44:35.735034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.771 [2024-07-14 04:44:35.870498] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.771 Malloc0 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.771 [2024-07-14 04:44:35.941727] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.771 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.771 [ 00:28:15.771 { 00:28:15.771 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:15.771 "subtype": "Discovery", 00:28:15.771 "listen_addresses": [ 00:28:15.771 { 00:28:15.771 "trtype": "TCP", 00:28:15.771 "adrfam": "IPv4", 00:28:15.771 "traddr": "10.0.0.2", 00:28:15.771 "trsvcid": "4420" 00:28:15.771 } 00:28:15.771 ], 00:28:15.771 "allow_any_host": true, 00:28:15.771 "hosts": [] 00:28:15.771 }, 00:28:16.031 { 00:28:16.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:16.031 "subtype": "NVMe", 00:28:16.031 "listen_addresses": [ 00:28:16.031 { 00:28:16.031 "trtype": "TCP", 00:28:16.031 "adrfam": "IPv4", 00:28:16.031 "traddr": "10.0.0.2", 00:28:16.031 "trsvcid": "4420" 00:28:16.031 } 00:28:16.031 ], 00:28:16.031 "allow_any_host": true, 00:28:16.031 "hosts": [], 00:28:16.031 "serial_number": "SPDK00000000000001", 00:28:16.031 "model_number": "SPDK bdev Controller", 00:28:16.031 "max_namespaces": 32, 00:28:16.031 "min_cntlid": 1, 00:28:16.031 "max_cntlid": 65519, 00:28:16.031 "namespaces": [ 00:28:16.031 { 00:28:16.031 "nsid": 1, 00:28:16.031 "bdev_name": "Malloc0", 00:28:16.031 "name": "Malloc0", 00:28:16.031 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:16.031 "eui64": "ABCDEF0123456789", 00:28:16.031 "uuid": "7646b0fb-2378-4702-99b9-a614e3d776fd" 00:28:16.031 } 00:28:16.031 ] 00:28:16.031 } 00:28:16.031 ] 00:28:16.031 04:44:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.031 04:44:35 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:16.031 [2024-07-14 04:44:35.978414] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:28:16.031 [2024-07-14 04:44:35.978450] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886432 ] 00:28:16.031 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.031 [2024-07-14 04:44:36.010178] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:16.031 [2024-07-14 04:44:36.010234] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:16.031 [2024-07-14 04:44:36.010244] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:16.031 [2024-07-14 04:44:36.010263] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:16.031 [2024-07-14 04:44:36.010276] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:16.031 [2024-07-14 04:44:36.013921] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:16.031 [2024-07-14 04:44:36.013974] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf29120 0 00:28:16.031 [2024-07-14 04:44:36.020880] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:16.031 [2024-07-14 04:44:36.020902] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:16.031 [2024-07-14 04:44:36.020912] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:16.031 [2024-07-14 04:44:36.020918] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:16.031 [2024-07-14 04:44:36.020969] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.031 [2024-07-14 04:44:36.020982] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.031 [2024-07-14 04:44:36.020990] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf29120) 00:28:16.031 [2024-07-14 04:44:36.021010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:16.031 [2024-07-14 04:44:36.021037] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf821f0, cid 0, qid 0 00:28:16.031 [2024-07-14 04:44:36.027878] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.031 [2024-07-14 04:44:36.027897] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.031 [2024-07-14 04:44:36.027905] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.027914] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf821f0) on tqpair=0xf29120 00:28:16.032 [2024-07-14 04:44:36.027930] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:16.032 [2024-07-14 04:44:36.027941] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:16.032 [2024-07-14 04:44:36.027950] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:16.032 [2024-07-14 04:44:36.027979] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.027988] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.027995] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf29120) 00:28:16.032 [2024-07-14 04:44:36.028006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.032 [2024-07-14 04:44:36.028030] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf821f0, cid 0, qid 0 00:28:16.032 [2024-07-14 04:44:36.028249] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.032 [2024-07-14 04:44:36.028264] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.032 [2024-07-14 04:44:36.028272] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.028278] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf821f0) on tqpair=0xf29120 00:28:16.032 [2024-07-14 04:44:36.028292] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:16.032 [2024-07-14 04:44:36.028306] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:16.032 [2024-07-14 04:44:36.028319] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.028326] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.028346] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf29120) 00:28:16.032 [2024-07-14 04:44:36.028357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.032 [2024-07-14 04:44:36.028378] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf821f0, cid 0, qid 0 00:28:16.032 [2024-07-14 04:44:36.028546] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.032 [2024-07-14 04:44:36.028561] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.032 [2024-07-14 04:44:36.028568] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.028575] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf821f0) on tqpair=0xf29120 00:28:16.032 [2024-07-14 04:44:36.028584] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:16.032 [2024-07-14 04:44:36.028598] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:16.032 [2024-07-14 04:44:36.028610] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.028618] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.028624] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf29120) 00:28:16.032 [2024-07-14 04:44:36.028634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.032 [2024-07-14 04:44:36.028655] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf821f0, cid 0, qid 0 00:28:16.032 [2024-07-14 04:44:36.028830] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.032 [2024-07-14 04:44:36.028842] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.032 [2024-07-14 04:44:36.028864] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.028878] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf821f0) on tqpair=0xf29120 00:28:16.032 [2024-07-14 04:44:36.028887] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:16.032 [2024-07-14 04:44:36.028905] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.028914] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.028924] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf29120) 00:28:16.032 [2024-07-14 04:44:36.028936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.032 [2024-07-14 04:44:36.028957] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf821f0, cid 0, qid 0 00:28:16.032 [2024-07-14 04:44:36.029119] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.032 [2024-07-14 04:44:36.029135] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.032 [2024-07-14 04:44:36.029142] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.029148] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf821f0) on tqpair=0xf29120 00:28:16.032 [2024-07-14 04:44:36.029158] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:16.032 [2024-07-14 04:44:36.029181] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:16.032 [2024-07-14 04:44:36.029194] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:16.032 [2024-07-14 04:44:36.029304] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:16.032 [2024-07-14 04:44:36.029313] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:16.032 [2024-07-14 04:44:36.029328] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.029335] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.029341] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf29120) 00:28:16.032 [2024-07-14 04:44:36.029351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.032 [2024-07-14 04:44:36.029371] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf821f0, cid 0, qid 0 00:28:16.032 [2024-07-14 04:44:36.029545] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.032 [2024-07-14 04:44:36.029560] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.032 [2024-07-14 04:44:36.029567] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.032 
[2024-07-14 04:44:36.029574] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf821f0) on tqpair=0xf29120 00:28:16.032 [2024-07-14 04:44:36.029583] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:16.032 [2024-07-14 04:44:36.029599] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.029608] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.029614] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf29120) 00:28:16.032 [2024-07-14 04:44:36.029624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.032 [2024-07-14 04:44:36.029644] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf821f0, cid 0, qid 0 00:28:16.032 [2024-07-14 04:44:36.029794] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.032 [2024-07-14 04:44:36.029809] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.032 [2024-07-14 04:44:36.029816] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.029822] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf821f0) on tqpair=0xf29120 00:28:16.032 [2024-07-14 04:44:36.029830] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:16.032 [2024-07-14 04:44:36.029842] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:16.032 [2024-07-14 04:44:36.029885] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:16.032 [2024-07-14 04:44:36.029901] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:16.032 [2024-07-14 04:44:36.029919] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.029928] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf29120) 00:28:16.032 [2024-07-14 04:44:36.029939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.032 [2024-07-14 04:44:36.029961] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf821f0, cid 0, qid 0 00:28:16.032 [2024-07-14 04:44:36.030155] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.032 [2024-07-14 04:44:36.030171] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.032 [2024-07-14 04:44:36.030178] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.030201] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf29120): datao=0, datal=4096, cccid=0 00:28:16.032 [2024-07-14 04:44:36.030208] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf821f0) on tqpair(0xf29120): expected_datao=0, payload_size=4096 00:28:16.032 [2024-07-14 04:44:36.030217] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.032 
[2024-07-14 04:44:36.030228] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.030236] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.030306] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.032 [2024-07-14 04:44:36.030317] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.032 [2024-07-14 04:44:36.030324] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.030331] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf821f0) on tqpair=0xf29120 00:28:16.032 [2024-07-14 04:44:36.030347] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:16.032 [2024-07-14 04:44:36.030357] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:16.032 [2024-07-14 04:44:36.030365] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:16.032 [2024-07-14 04:44:36.030373] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:16.032 [2024-07-14 04:44:36.030381] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:16.032 [2024-07-14 04:44:36.030389] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:16.032 [2024-07-14 04:44:36.030404] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:16.032 [2024-07-14 04:44:36.030416] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.030438] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.032 [2024-07-14 04:44:36.030444] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf29120) 00:28:16.032 [2024-07-14 04:44:36.030455] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:16.032 [2024-07-14 04:44:36.030475] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf821f0, cid 0, qid 0 00:28:16.032 [2024-07-14 04:44:36.030656] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.033 [2024-07-14 04:44:36.030675] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.033 [2024-07-14 04:44:36.030683] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.030690] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf821f0) on tqpair=0xf29120 00:28:16.033 [2024-07-14 04:44:36.030702] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.030709] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.030715] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf29120) 00:28:16.033 [2024-07-14 04:44:36.030725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.033 [2024-07-14 04:44:36.030735] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.030741] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.030748] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xf29120) 00:28:16.033 [2024-07-14 04:44:36.030756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.033 [2024-07-14 04:44:36.030765] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.030772] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.030778] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf29120) 00:28:16.033 [2024-07-14 04:44:36.030801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.033 [2024-07-14 04:44:36.030811] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.030817] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.030823] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29120) 00:28:16.033 [2024-07-14 04:44:36.030831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.033 [2024-07-14 04:44:36.030839] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:16.033 [2024-07-14 04:44:36.030879] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:16.033 [2024-07-14 04:44:36.030894] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.030902] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf29120) 00:28:16.033 [2024-07-14 04:44:36.030912] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.033 [2024-07-14 04:44:36.030934] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf821f0, cid 0, qid 0 00:28:16.033 [2024-07-14 04:44:36.030960] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf82350, cid 1, qid 0 00:28:16.033 [2024-07-14 04:44:36.030968] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf824b0, cid 2, qid 0 00:28:16.033 [2024-07-14 04:44:36.030976] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf82610, cid 3, qid 0 00:28:16.033 [2024-07-14 04:44:36.030983] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf82770, cid 4, qid 0 00:28:16.033 [2024-07-14 04:44:36.031214] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.033 [2024-07-14 04:44:36.031227] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.033 [2024-07-14 04:44:36.031234] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.031255] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf82770) on tqpair=0xf29120 00:28:16.033 [2024-07-14 04:44:36.031265] 
nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:16.033 [2024-07-14 04:44:36.031277] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:16.033 [2024-07-14 04:44:36.031295] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.031303] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf29120) 00:28:16.033 [2024-07-14 04:44:36.031313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.033 [2024-07-14 04:44:36.031332] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf82770, cid 4, qid 0 00:28:16.033 [2024-07-14 04:44:36.031543] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.033 [2024-07-14 04:44:36.031555] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.033 [2024-07-14 04:44:36.031562] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.031568] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf29120): datao=0, datal=4096, cccid=4 00:28:16.033 [2024-07-14 04:44:36.031576] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf82770) on tqpair(0xf29120): expected_datao=0, payload_size=4096 00:28:16.033 [2024-07-14 04:44:36.031583] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.031634] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.031643] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.074880] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.033 [2024-07-14 04:44:36.074899] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.033 [2024-07-14 04:44:36.074907] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.074915] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf82770) on tqpair=0xf29120 00:28:16.033 [2024-07-14 04:44:36.074935] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:16.033 [2024-07-14 04:44:36.074972] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.074982] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf29120) 00:28:16.033 [2024-07-14 04:44:36.074994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.033 [2024-07-14 04:44:36.075006] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.075014] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.075020] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf29120) 00:28:16.033 [2024-07-14 04:44:36.075029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.033 [2024-07-14 04:44:36.075057] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xf82770, cid 4, qid 0 00:28:16.033 [2024-07-14 04:44:36.075085] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf828d0, cid 5, qid 0 00:28:16.033 [2024-07-14 04:44:36.075309] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.033 [2024-07-14 04:44:36.075325] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.033 [2024-07-14 04:44:36.075332] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.075338] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf29120): datao=0, datal=1024, cccid=4 00:28:16.033 [2024-07-14 04:44:36.075346] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf82770) on tqpair(0xf29120): expected_datao=0, payload_size=1024 00:28:16.033 [2024-07-14 04:44:36.075354] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.075377] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.075385] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.075398] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.033 [2024-07-14 04:44:36.075408] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.033 [2024-07-14 04:44:36.075415] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.075421] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf828d0) on tqpair=0xf29120 00:28:16.033 [2024-07-14 04:44:36.121882] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.033 [2024-07-14 04:44:36.121901] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.033 [2024-07-14 04:44:36.121908] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.121915] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf82770) on tqpair=0xf29120 00:28:16.033 [2024-07-14 04:44:36.121938] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.121948] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf29120) 00:28:16.033 [2024-07-14 04:44:36.121960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.033 [2024-07-14 04:44:36.121989] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf82770, cid 4, qid 0 00:28:16.033 [2024-07-14 04:44:36.122223] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.033 [2024-07-14 04:44:36.122238] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.033 [2024-07-14 04:44:36.122245] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.122251] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf29120): datao=0, datal=3072, cccid=4 00:28:16.033 [2024-07-14 04:44:36.122258] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf82770) on tqpair(0xf29120): expected_datao=0, payload_size=3072 00:28:16.033 [2024-07-14 04:44:36.122265] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.122276] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.122283] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.122369] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.033 [2024-07-14 04:44:36.122380] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.033 [2024-07-14 04:44:36.122387] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.122394] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf82770) on tqpair=0xf29120 00:28:16.033 [2024-07-14 04:44:36.122408] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.122417] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf29120) 00:28:16.033 [2024-07-14 04:44:36.122427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.033 [2024-07-14 04:44:36.122454] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf82770, cid 4, qid 0 00:28:16.033 [2024-07-14 04:44:36.122630] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.033 [2024-07-14 04:44:36.122645] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.033 [2024-07-14 04:44:36.122653] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.122659] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf29120): datao=0, datal=8, cccid=4 00:28:16.033 [2024-07-14 04:44:36.122666] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf82770) on tqpair(0xf29120): expected_datao=0, payload_size=8 00:28:16.033 [2024-07-14 04:44:36.122674] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.122683] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.122690] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.163042] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.033 [2024-07-14 04:44:36.163068] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.033 [2024-07-14 04:44:36.163077] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.033 [2024-07-14 04:44:36.163084] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf82770) on tqpair=0xf29120 00:28:16.033 ===================================================== 00:28:16.033 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:16.034 ===================================================== 00:28:16.034 Controller Capabilities/Features 00:28:16.034 ================================ 00:28:16.034 Vendor ID: 0000 00:28:16.034 Subsystem Vendor ID: 0000 00:28:16.034 Serial Number: .................... 00:28:16.034 Model Number: ........................................ 
00:28:16.034 Firmware Version: 24.05.1 00:28:16.034 Recommended Arb Burst: 0 00:28:16.034 IEEE OUI Identifier: 00 00 00 00:28:16.034 Multi-path I/O 00:28:16.034 May have multiple subsystem ports: No 00:28:16.034 May have multiple controllers: No 00:28:16.034 Associated with SR-IOV VF: No 00:28:16.034 Max Data Transfer Size: 131072 00:28:16.034 Max Number of Namespaces: 0 00:28:16.034 Max Number of I/O Queues: 1024 00:28:16.034 NVMe Specification Version (VS): 1.3 00:28:16.034 NVMe Specification Version (Identify): 1.3 00:28:16.034 Maximum Queue Entries: 128 00:28:16.034 Contiguous Queues Required: Yes 00:28:16.034 Arbitration Mechanisms Supported 00:28:16.034 Weighted Round Robin: Not Supported 00:28:16.034 Vendor Specific: Not Supported 00:28:16.034 Reset Timeout: 15000 ms 00:28:16.034 Doorbell Stride: 4 bytes 00:28:16.034 NVM Subsystem Reset: Not Supported 00:28:16.034 Command Sets Supported 00:28:16.034 NVM Command Set: Supported 00:28:16.034 Boot Partition: Not Supported 00:28:16.034 Memory Page Size Minimum: 4096 bytes 00:28:16.034 Memory Page Size Maximum: 4096 bytes 00:28:16.034 Persistent Memory Region: Not Supported 00:28:16.034 Optional Asynchronous Events Supported 00:28:16.034 Namespace Attribute Notices: Not Supported 00:28:16.034 Firmware Activation Notices: Not Supported 00:28:16.034 ANA Change Notices: Not Supported 00:28:16.034 PLE Aggregate Log Change Notices: Not Supported 00:28:16.034 LBA Status Info Alert Notices: Not Supported 00:28:16.034 EGE Aggregate Log Change Notices: Not Supported 00:28:16.034 Normal NVM Subsystem Shutdown event: Not Supported 00:28:16.034 Zone Descriptor Change Notices: Not Supported 00:28:16.034 Discovery Log Change Notices: Supported 00:28:16.034 Controller Attributes 00:28:16.034 128-bit Host Identifier: Not Supported 00:28:16.034 Non-Operational Permissive Mode: Not Supported 00:28:16.034 NVM Sets: Not Supported 00:28:16.034 Read Recovery Levels: Not Supported 00:28:16.034 Endurance Groups: Not Supported 00:28:16.034 Predictable Latency Mode: Not Supported 00:28:16.034 Traffic Based Keep ALive: Not Supported 00:28:16.034 Namespace Granularity: Not Supported 00:28:16.034 SQ Associations: Not Supported 00:28:16.034 UUID List: Not Supported 00:28:16.034 Multi-Domain Subsystem: Not Supported 00:28:16.034 Fixed Capacity Management: Not Supported 00:28:16.034 Variable Capacity Management: Not Supported 00:28:16.034 Delete Endurance Group: Not Supported 00:28:16.034 Delete NVM Set: Not Supported 00:28:16.034 Extended LBA Formats Supported: Not Supported 00:28:16.034 Flexible Data Placement Supported: Not Supported 00:28:16.034 00:28:16.034 Controller Memory Buffer Support 00:28:16.034 ================================ 00:28:16.034 Supported: No 00:28:16.034 00:28:16.034 Persistent Memory Region Support 00:28:16.034 ================================ 00:28:16.034 Supported: No 00:28:16.034 00:28:16.034 Admin Command Set Attributes 00:28:16.034 ============================ 00:28:16.034 Security Send/Receive: Not Supported 00:28:16.034 Format NVM: Not Supported 00:28:16.034 Firmware Activate/Download: Not Supported 00:28:16.034 Namespace Management: Not Supported 00:28:16.034 Device Self-Test: Not Supported 00:28:16.034 Directives: Not Supported 00:28:16.034 NVMe-MI: Not Supported 00:28:16.034 Virtualization Management: Not Supported 00:28:16.034 Doorbell Buffer Config: Not Supported 00:28:16.034 Get LBA Status Capability: Not Supported 00:28:16.034 Command & Feature Lockdown Capability: Not Supported 00:28:16.034 Abort Command Limit: 1 00:28:16.034 
Async Event Request Limit: 4 00:28:16.034 Number of Firmware Slots: N/A 00:28:16.034 Firmware Slot 1 Read-Only: N/A 00:28:16.034 Firmware Activation Without Reset: N/A 00:28:16.034 Multiple Update Detection Support: N/A 00:28:16.034 Firmware Update Granularity: No Information Provided 00:28:16.034 Per-Namespace SMART Log: No 00:28:16.034 Asymmetric Namespace Access Log Page: Not Supported 00:28:16.034 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:16.034 Command Effects Log Page: Not Supported 00:28:16.034 Get Log Page Extended Data: Supported 00:28:16.034 Telemetry Log Pages: Not Supported 00:28:16.034 Persistent Event Log Pages: Not Supported 00:28:16.034 Supported Log Pages Log Page: May Support 00:28:16.034 Commands Supported & Effects Log Page: Not Supported 00:28:16.034 Feature Identifiers & Effects Log Page:May Support 00:28:16.034 NVMe-MI Commands & Effects Log Page: May Support 00:28:16.034 Data Area 4 for Telemetry Log: Not Supported 00:28:16.034 Error Log Page Entries Supported: 128 00:28:16.034 Keep Alive: Not Supported 00:28:16.034 00:28:16.034 NVM Command Set Attributes 00:28:16.034 ========================== 00:28:16.034 Submission Queue Entry Size 00:28:16.034 Max: 1 00:28:16.034 Min: 1 00:28:16.034 Completion Queue Entry Size 00:28:16.034 Max: 1 00:28:16.034 Min: 1 00:28:16.034 Number of Namespaces: 0 00:28:16.034 Compare Command: Not Supported 00:28:16.034 Write Uncorrectable Command: Not Supported 00:28:16.034 Dataset Management Command: Not Supported 00:28:16.034 Write Zeroes Command: Not Supported 00:28:16.034 Set Features Save Field: Not Supported 00:28:16.034 Reservations: Not Supported 00:28:16.034 Timestamp: Not Supported 00:28:16.034 Copy: Not Supported 00:28:16.034 Volatile Write Cache: Not Present 00:28:16.034 Atomic Write Unit (Normal): 1 00:28:16.034 Atomic Write Unit (PFail): 1 00:28:16.034 Atomic Compare & Write Unit: 1 00:28:16.034 Fused Compare & Write: Supported 00:28:16.034 Scatter-Gather List 00:28:16.034 SGL Command Set: Supported 00:28:16.034 SGL Keyed: Supported 00:28:16.034 SGL Bit Bucket Descriptor: Not Supported 00:28:16.034 SGL Metadata Pointer: Not Supported 00:28:16.034 Oversized SGL: Not Supported 00:28:16.034 SGL Metadata Address: Not Supported 00:28:16.034 SGL Offset: Supported 00:28:16.034 Transport SGL Data Block: Not Supported 00:28:16.034 Replay Protected Memory Block: Not Supported 00:28:16.034 00:28:16.034 Firmware Slot Information 00:28:16.034 ========================= 00:28:16.034 Active slot: 0 00:28:16.034 00:28:16.034 00:28:16.034 Error Log 00:28:16.034 ========= 00:28:16.034 00:28:16.034 Active Namespaces 00:28:16.034 ================= 00:28:16.034 Discovery Log Page 00:28:16.034 ================== 00:28:16.034 Generation Counter: 2 00:28:16.034 Number of Records: 2 00:28:16.034 Record Format: 0 00:28:16.034 00:28:16.034 Discovery Log Entry 0 00:28:16.034 ---------------------- 00:28:16.034 Transport Type: 3 (TCP) 00:28:16.034 Address Family: 1 (IPv4) 00:28:16.034 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:16.034 Entry Flags: 00:28:16.034 Duplicate Returned Information: 1 00:28:16.034 Explicit Persistent Connection Support for Discovery: 1 00:28:16.034 Transport Requirements: 00:28:16.034 Secure Channel: Not Required 00:28:16.034 Port ID: 0 (0x0000) 00:28:16.034 Controller ID: 65535 (0xffff) 00:28:16.034 Admin Max SQ Size: 128 00:28:16.034 Transport Service Identifier: 4420 00:28:16.034 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:16.034 Transport Address: 10.0.0.2 00:28:16.034 
Discovery Log Entry 1 00:28:16.034 ---------------------- 00:28:16.034 Transport Type: 3 (TCP) 00:28:16.034 Address Family: 1 (IPv4) 00:28:16.034 Subsystem Type: 2 (NVM Subsystem) 00:28:16.034 Entry Flags: 00:28:16.034 Duplicate Returned Information: 0 00:28:16.034 Explicit Persistent Connection Support for Discovery: 0 00:28:16.034 Transport Requirements: 00:28:16.034 Secure Channel: Not Required 00:28:16.034 Port ID: 0 (0x0000) 00:28:16.034 Controller ID: 65535 (0xffff) 00:28:16.034 Admin Max SQ Size: 128 00:28:16.034 Transport Service Identifier: 4420 00:28:16.034 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:16.034 Transport Address: 10.0.0.2 [2024-07-14 04:44:36.163205] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:16.034 [2024-07-14 04:44:36.163246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.034 [2024-07-14 04:44:36.163258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.034 [2024-07-14 04:44:36.163268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.034 [2024-07-14 04:44:36.163277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.034 [2024-07-14 04:44:36.163295] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.034 [2024-07-14 04:44:36.163304] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.034 [2024-07-14 04:44:36.163311] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29120) 00:28:16.034 [2024-07-14 04:44:36.163322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.034 [2024-07-14 04:44:36.163360] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf82610, cid 3, qid 0 00:28:16.035 [2024-07-14 04:44:36.163525] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.035 [2024-07-14 04:44:36.163540] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.035 [2024-07-14 04:44:36.163548] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.035 [2024-07-14 04:44:36.163554] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf82610) on tqpair=0xf29120 00:28:16.035 [2024-07-14 04:44:36.163566] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.035 [2024-07-14 04:44:36.163574] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.035 [2024-07-14 04:44:36.163580] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29120) 00:28:16.035 [2024-07-14 04:44:36.163591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.035 [2024-07-14 04:44:36.163617] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf82610, cid 3, qid 0 00:28:16.035 [2024-07-14 04:44:36.163824] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.035 [2024-07-14 04:44:36.163836] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.035 [2024-07-14 04:44:36.163843] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.035 [2024-07-14 04:44:36.167870] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf82610) on tqpair=0xf29120 00:28:16.035 [2024-07-14 04:44:36.167888] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:16.035 [2024-07-14 04:44:36.167896] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:16.035 [2024-07-14 04:44:36.167915] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.035 [2024-07-14 04:44:36.167924] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.035 [2024-07-14 04:44:36.167931] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29120) 00:28:16.035 [2024-07-14 04:44:36.167941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.035 [2024-07-14 04:44:36.167963] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf82610, cid 3, qid 0 00:28:16.035 [2024-07-14 04:44:36.168181] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.035 [2024-07-14 04:44:36.168198] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.035 [2024-07-14 04:44:36.168206] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.035 [2024-07-14 04:44:36.168213] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf82610) on tqpair=0xf29120 00:28:16.035 [2024-07-14 04:44:36.168227] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 0 milliseconds 00:28:16.035 00:28:16.035 04:44:36 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:16.035 [2024-07-14 04:44:36.198472] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:28:16.035 [2024-07-14 04:44:36.198513] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886440 ] 00:28:16.035 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.297 [2024-07-14 04:44:36.231755] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:16.297 [2024-07-14 04:44:36.231802] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:16.297 [2024-07-14 04:44:36.231812] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:16.297 [2024-07-14 04:44:36.231826] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:16.297 [2024-07-14 04:44:36.231837] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:16.297 [2024-07-14 04:44:36.232138] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:16.297 [2024-07-14 04:44:36.232182] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x5cb120 0 00:28:16.297 [2024-07-14 04:44:36.245879] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:16.297 [2024-07-14 04:44:36.245907] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:16.297 [2024-07-14 04:44:36.245915] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:16.297 [2024-07-14 04:44:36.245921] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:16.297 [2024-07-14 04:44:36.245972] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.297 [2024-07-14 04:44:36.245985] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.297 [2024-07-14 04:44:36.245992] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5cb120) 00:28:16.297 [2024-07-14 04:44:36.246005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:16.297 [2024-07-14 04:44:36.246032] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6241f0, cid 0, qid 0 00:28:16.297 [2024-07-14 04:44:36.253877] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.297 [2024-07-14 04:44:36.253895] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.297 [2024-07-14 04:44:36.253903] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.297 [2024-07-14 04:44:36.253909] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6241f0) on tqpair=0x5cb120 00:28:16.297 [2024-07-14 04:44:36.253923] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:16.297 [2024-07-14 04:44:36.253948] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:16.297 [2024-07-14 04:44:36.253958] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:16.297 [2024-07-14 04:44:36.253981] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.297 [2024-07-14 04:44:36.253991] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.297 [2024-07-14 
04:44:36.253998] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5cb120) 00:28:16.297 [2024-07-14 04:44:36.254010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.297 [2024-07-14 04:44:36.254034] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6241f0, cid 0, qid 0 00:28:16.297 [2024-07-14 04:44:36.254219] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.297 [2024-07-14 04:44:36.254234] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.297 [2024-07-14 04:44:36.254242] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.297 [2024-07-14 04:44:36.254249] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6241f0) on tqpair=0x5cb120 00:28:16.297 [2024-07-14 04:44:36.254261] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:16.297 [2024-07-14 04:44:36.254276] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:16.297 [2024-07-14 04:44:36.254289] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.297 [2024-07-14 04:44:36.254297] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.297 [2024-07-14 04:44:36.254303] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5cb120) 00:28:16.297 [2024-07-14 04:44:36.254314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.297 [2024-07-14 04:44:36.254335] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6241f0, cid 0, qid 0 00:28:16.297 [2024-07-14 04:44:36.254511] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.297 [2024-07-14 04:44:36.254526] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.298 [2024-07-14 04:44:36.254533] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.254540] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6241f0) on tqpair=0x5cb120 00:28:16.298 [2024-07-14 04:44:36.254549] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:16.298 [2024-07-14 04:44:36.254563] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:16.298 [2024-07-14 04:44:36.254576] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.254583] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.254590] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5cb120) 00:28:16.298 [2024-07-14 04:44:36.254600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.298 [2024-07-14 04:44:36.254622] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6241f0, cid 0, qid 0 00:28:16.298 [2024-07-14 04:44:36.254789] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.298 [2024-07-14 04:44:36.254802] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.298 
[2024-07-14 04:44:36.254809] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.254815] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6241f0) on tqpair=0x5cb120 00:28:16.298 [2024-07-14 04:44:36.254824] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:16.298 [2024-07-14 04:44:36.254841] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.254850] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.254856] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5cb120) 00:28:16.298 [2024-07-14 04:44:36.254879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.298 [2024-07-14 04:44:36.254904] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6241f0, cid 0, qid 0 00:28:16.298 [2024-07-14 04:44:36.255080] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.298 [2024-07-14 04:44:36.255093] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.298 [2024-07-14 04:44:36.255100] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.255106] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6241f0) on tqpair=0x5cb120 00:28:16.298 [2024-07-14 04:44:36.255114] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:16.298 [2024-07-14 04:44:36.255123] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:16.298 [2024-07-14 04:44:36.255136] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:16.298 [2024-07-14 04:44:36.255246] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:16.298 [2024-07-14 04:44:36.255254] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:16.298 [2024-07-14 04:44:36.255266] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.255274] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.255280] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5cb120) 00:28:16.298 [2024-07-14 04:44:36.255290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.298 [2024-07-14 04:44:36.255311] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6241f0, cid 0, qid 0 00:28:16.298 [2024-07-14 04:44:36.255492] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.298 [2024-07-14 04:44:36.255505] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.298 [2024-07-14 04:44:36.255512] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.255518] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6241f0) on tqpair=0x5cb120 00:28:16.298 
[2024-07-14 04:44:36.255527] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:16.298 [2024-07-14 04:44:36.255543] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.255552] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.255559] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5cb120) 00:28:16.298 [2024-07-14 04:44:36.255569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.298 [2024-07-14 04:44:36.255590] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6241f0, cid 0, qid 0 00:28:16.298 [2024-07-14 04:44:36.255751] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.298 [2024-07-14 04:44:36.255763] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.298 [2024-07-14 04:44:36.255770] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.255777] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6241f0) on tqpair=0x5cb120 00:28:16.298 [2024-07-14 04:44:36.255784] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:16.298 [2024-07-14 04:44:36.255793] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:16.298 [2024-07-14 04:44:36.255806] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:16.298 [2024-07-14 04:44:36.255826] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:16.298 [2024-07-14 04:44:36.255843] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.255852] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5cb120) 00:28:16.298 [2024-07-14 04:44:36.255862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.298 [2024-07-14 04:44:36.255892] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6241f0, cid 0, qid 0 00:28:16.298 [2024-07-14 04:44:36.256124] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.298 [2024-07-14 04:44:36.256140] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.298 [2024-07-14 04:44:36.256147] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.256154] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5cb120): datao=0, datal=4096, cccid=0 00:28:16.298 [2024-07-14 04:44:36.256162] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6241f0) on tqpair(0x5cb120): expected_datao=0, payload_size=4096 00:28:16.298 [2024-07-14 04:44:36.256169] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.256194] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.256203] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:28:16.298 [2024-07-14 04:44:36.297876] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.298 [2024-07-14 04:44:36.297896] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.298 [2024-07-14 04:44:36.297904] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.297911] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6241f0) on tqpair=0x5cb120 00:28:16.298 [2024-07-14 04:44:36.297928] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:16.298 [2024-07-14 04:44:36.297938] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:16.298 [2024-07-14 04:44:36.297946] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:16.298 [2024-07-14 04:44:36.297953] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:16.298 [2024-07-14 04:44:36.297960] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:16.298 [2024-07-14 04:44:36.297969] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:16.298 [2024-07-14 04:44:36.297984] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:16.298 [2024-07-14 04:44:36.297997] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.298005] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.298011] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5cb120) 00:28:16.298 [2024-07-14 04:44:36.298023] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:16.298 [2024-07-14 04:44:36.298046] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6241f0, cid 0, qid 0 00:28:16.298 [2024-07-14 04:44:36.298225] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.298 [2024-07-14 04:44:36.298241] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.298 [2024-07-14 04:44:36.298248] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.298255] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6241f0) on tqpair=0x5cb120 00:28:16.298 [2024-07-14 04:44:36.298269] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.298278] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.298284] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5cb120) 00:28:16.298 [2024-07-14 04:44:36.298295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.298 [2024-07-14 04:44:36.298305] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.298 [2024-07-14 04:44:36.298312] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.298318] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0x5cb120) 00:28:16.299 [2024-07-14 04:44:36.298327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.299 [2024-07-14 04:44:36.298336] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.298343] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.298349] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x5cb120) 00:28:16.299 [2024-07-14 04:44:36.298374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.299 [2024-07-14 04:44:36.298384] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.298390] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.298396] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5cb120) 00:28:16.299 [2024-07-14 04:44:36.298404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.299 [2024-07-14 04:44:36.298413] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:16.299 [2024-07-14 04:44:36.298432] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:16.299 [2024-07-14 04:44:36.298445] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.298452] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5cb120) 00:28:16.299 [2024-07-14 04:44:36.298462] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.299 [2024-07-14 04:44:36.298484] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6241f0, cid 0, qid 0 00:28:16.299 [2024-07-14 04:44:36.298511] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x624350, cid 1, qid 0 00:28:16.299 [2024-07-14 04:44:36.298519] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6244b0, cid 2, qid 0 00:28:16.299 [2024-07-14 04:44:36.298527] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x624610, cid 3, qid 0 00:28:16.299 [2024-07-14 04:44:36.298534] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x624770, cid 4, qid 0 00:28:16.299 [2024-07-14 04:44:36.298734] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.299 [2024-07-14 04:44:36.298749] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.299 [2024-07-14 04:44:36.298757] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.298764] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x624770) on tqpair=0x5cb120 00:28:16.299 [2024-07-14 04:44:36.298772] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:16.299 [2024-07-14 04:44:36.298781] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:16.299 
[2024-07-14 04:44:36.298796] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:16.299 [2024-07-14 04:44:36.298827] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:16.299 [2024-07-14 04:44:36.298839] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.298846] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.298852] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5cb120) 00:28:16.299 [2024-07-14 04:44:36.298863] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:16.299 [2024-07-14 04:44:36.298908] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x624770, cid 4, qid 0 00:28:16.299 [2024-07-14 04:44:36.299089] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.299 [2024-07-14 04:44:36.299105] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.299 [2024-07-14 04:44:36.299112] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.299119] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x624770) on tqpair=0x5cb120 00:28:16.299 [2024-07-14 04:44:36.299187] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:16.299 [2024-07-14 04:44:36.299208] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:16.299 [2024-07-14 04:44:36.299238] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.299246] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5cb120) 00:28:16.299 [2024-07-14 04:44:36.299257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.299 [2024-07-14 04:44:36.299278] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x624770, cid 4, qid 0 00:28:16.299 [2024-07-14 04:44:36.299477] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.299 [2024-07-14 04:44:36.299490] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.299 [2024-07-14 04:44:36.299497] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.299503] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5cb120): datao=0, datal=4096, cccid=4 00:28:16.299 [2024-07-14 04:44:36.299511] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x624770) on tqpair(0x5cb120): expected_datao=0, payload_size=4096 00:28:16.299 [2024-07-14 04:44:36.299518] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.299529] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.299536] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.299587] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.299 [2024-07-14 04:44:36.299599] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.299 [2024-07-14 04:44:36.299606] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.299613] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x624770) on tqpair=0x5cb120 00:28:16.299 [2024-07-14 04:44:36.299628] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:16.299 [2024-07-14 04:44:36.299645] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:16.299 [2024-07-14 04:44:36.299662] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:16.299 [2024-07-14 04:44:36.299676] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.299684] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5cb120) 00:28:16.299 [2024-07-14 04:44:36.299695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.299 [2024-07-14 04:44:36.299720] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x624770, cid 4, qid 0 00:28:16.299 [2024-07-14 04:44:36.299899] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.299 [2024-07-14 04:44:36.299915] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.299 [2024-07-14 04:44:36.299922] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.299929] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5cb120): datao=0, datal=4096, cccid=4 00:28:16.299 [2024-07-14 04:44:36.299937] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x624770) on tqpair(0x5cb120): expected_datao=0, payload_size=4096 00:28:16.299 [2024-07-14 04:44:36.299944] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.299955] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.299962] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.300017] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.299 [2024-07-14 04:44:36.300029] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.299 [2024-07-14 04:44:36.300036] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.300043] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x624770) on tqpair=0x5cb120 00:28:16.299 [2024-07-14 04:44:36.300064] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:16.299 [2024-07-14 04:44:36.300083] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:16.299 [2024-07-14 04:44:36.300097] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.300105] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5cb120) 00:28:16.299 [2024-07-14 04:44:36.300116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.299 [2024-07-14 04:44:36.300138] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x624770, cid 4, qid 0 00:28:16.299 [2024-07-14 04:44:36.300304] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.299 [2024-07-14 04:44:36.300319] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.299 [2024-07-14 04:44:36.300327] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.300333] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5cb120): datao=0, datal=4096, cccid=4 00:28:16.299 [2024-07-14 04:44:36.300341] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x624770) on tqpair(0x5cb120): expected_datao=0, payload_size=4096 00:28:16.299 [2024-07-14 04:44:36.300348] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.300358] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.300366] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.300410] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.299 [2024-07-14 04:44:36.300422] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.299 [2024-07-14 04:44:36.300429] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.299 [2024-07-14 04:44:36.300436] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x624770) on tqpair=0x5cb120 00:28:16.299 [2024-07-14 04:44:36.300449] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:16.299 [2024-07-14 04:44:36.300463] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:16.299 [2024-07-14 04:44:36.300481] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:16.299 [2024-07-14 04:44:36.300497] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:16.299 [2024-07-14 04:44:36.300506] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:16.299 [2024-07-14 04:44:36.300515] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:16.299 [2024-07-14 04:44:36.300523] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:16.299 [2024-07-14 04:44:36.300532] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:16.300 [2024-07-14 04:44:36.300553] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.300563] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5cb120) 00:28:16.300 [2024-07-14 04:44:36.300574] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.300 [2024-07-14 04:44:36.300600] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.300607] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.300614] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5cb120) 00:28:16.300 [2024-07-14 04:44:36.300623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.300 [2024-07-14 04:44:36.300646] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x624770, cid 4, qid 0 00:28:16.300 [2024-07-14 04:44:36.300674] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6248d0, cid 5, qid 0 00:28:16.300 [2024-07-14 04:44:36.300855] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.300 [2024-07-14 04:44:36.300878] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.300 [2024-07-14 04:44:36.300886] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.300893] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x624770) on tqpair=0x5cb120 00:28:16.300 [2024-07-14 04:44:36.300903] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.300 [2024-07-14 04:44:36.300913] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.300 [2024-07-14 04:44:36.300920] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.300926] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6248d0) on tqpair=0x5cb120 00:28:16.300 [2024-07-14 04:44:36.300942] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.300952] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5cb120) 00:28:16.300 [2024-07-14 04:44:36.300963] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.300 [2024-07-14 04:44:36.300984] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6248d0, cid 5, qid 0 00:28:16.300 [2024-07-14 04:44:36.301149] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.300 [2024-07-14 04:44:36.301164] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.300 [2024-07-14 04:44:36.301172] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.301178] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6248d0) on tqpair=0x5cb120 00:28:16.300 [2024-07-14 04:44:36.301194] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.301203] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5cb120) 00:28:16.300 [2024-07-14 04:44:36.301214] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.300 [2024-07-14 04:44:36.301239] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6248d0, cid 5, qid 0 00:28:16.300 [2024-07-14 04:44:36.301395] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.300 [2024-07-14 04:44:36.301407] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.300 [2024-07-14 04:44:36.301414] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.301421] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6248d0) on tqpair=0x5cb120 00:28:16.300 [2024-07-14 04:44:36.301436] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.301445] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5cb120) 00:28:16.300 [2024-07-14 04:44:36.301456] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.300 [2024-07-14 04:44:36.301476] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6248d0, cid 5, qid 0 00:28:16.300 [2024-07-14 04:44:36.301636] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.300 [2024-07-14 04:44:36.301649] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.300 [2024-07-14 04:44:36.301656] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.301663] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6248d0) on tqpair=0x5cb120 00:28:16.300 [2024-07-14 04:44:36.301681] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.301691] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5cb120) 00:28:16.300 [2024-07-14 04:44:36.301701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.300 [2024-07-14 04:44:36.301713] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.301720] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5cb120) 00:28:16.300 [2024-07-14 04:44:36.301730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.300 [2024-07-14 04:44:36.301741] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.301748] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x5cb120) 00:28:16.300 [2024-07-14 04:44:36.301773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.300 [2024-07-14 04:44:36.301785] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.301792] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5cb120) 00:28:16.300 [2024-07-14 04:44:36.301800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.300 [2024-07-14 04:44:36.301821] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6248d0, cid 5, qid 0 00:28:16.300 [2024-07-14 04:44:36.301847] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x624770, cid 4, qid 0 00:28:16.300 [2024-07-14 04:44:36.301855] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x624a30, cid 6, qid 0 00:28:16.300 [2024-07-14 04:44:36.301863] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x624b90, cid 7, qid 0 00:28:16.300 [2024-07-14 04:44:36.305890] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.300 [2024-07-14 04:44:36.305903] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.300 [2024-07-14 04:44:36.305910] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.305917] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5cb120): datao=0, datal=8192, cccid=5 00:28:16.300 [2024-07-14 04:44:36.305927] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6248d0) on tqpair(0x5cb120): expected_datao=0, payload_size=8192 00:28:16.300 [2024-07-14 04:44:36.305935] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.305945] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.305953] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.305961] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.300 [2024-07-14 04:44:36.305970] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.300 [2024-07-14 04:44:36.305976] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.305982] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5cb120): datao=0, datal=512, cccid=4 00:28:16.300 [2024-07-14 04:44:36.305990] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x624770) on tqpair(0x5cb120): expected_datao=0, payload_size=512 00:28:16.300 [2024-07-14 04:44:36.305997] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.306006] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.306012] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.306020] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.300 [2024-07-14 04:44:36.306029] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.300 [2024-07-14 04:44:36.306036] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.306042] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5cb120): datao=0, datal=512, cccid=6 00:28:16.300 [2024-07-14 04:44:36.306049] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x624a30) on tqpair(0x5cb120): expected_datao=0, payload_size=512 00:28:16.300 [2024-07-14 04:44:36.306056] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.306065] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.300 [2024-07-14 04:44:36.306071] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.301 [2024-07-14 04:44:36.306080] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.301 [2024-07-14 04:44:36.306088] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.301 [2024-07-14 04:44:36.306095] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.301 [2024-07-14 04:44:36.306101] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5cb120): datao=0, datal=4096, cccid=7 00:28:16.301 [2024-07-14 04:44:36.306108] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x624b90) on tqpair(0x5cb120): expected_datao=0, payload_size=4096 00:28:16.301 [2024-07-14 04:44:36.306115] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.301 [2024-07-14 04:44:36.306124] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.301 [2024-07-14 04:44:36.306131] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.301 [2024-07-14 04:44:36.306139] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.301 [2024-07-14 04:44:36.306162] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.301 [2024-07-14 04:44:36.306169] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.301 [2024-07-14 04:44:36.306175] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6248d0) on tqpair=0x5cb120 00:28:16.301 [2024-07-14 04:44:36.306193] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.301 [2024-07-14 04:44:36.306204] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.301 [2024-07-14 04:44:36.306210] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.301 [2024-07-14 04:44:36.306217] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x624770) on tqpair=0x5cb120 00:28:16.301 [2024-07-14 04:44:36.306230] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.301 [2024-07-14 04:44:36.306240] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.301 [2024-07-14 04:44:36.306246] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.301 [2024-07-14 04:44:36.306256] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x624a30) on tqpair=0x5cb120 00:28:16.301 [2024-07-14 04:44:36.306269] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.301 [2024-07-14 04:44:36.306280] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.301 [2024-07-14 04:44:36.306286] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.301 [2024-07-14 04:44:36.306292] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x624b90) on tqpair=0x5cb120 00:28:16.301 ===================================================== 00:28:16.301 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:16.301 ===================================================== 00:28:16.301 Controller Capabilities/Features 00:28:16.301 ================================ 00:28:16.301 Vendor ID: 8086 00:28:16.301 Subsystem Vendor ID: 8086 00:28:16.301 Serial Number: SPDK00000000000001 00:28:16.301 Model Number: SPDK bdev Controller 00:28:16.301 Firmware Version: 24.05.1 00:28:16.301 Recommended Arb Burst: 6 00:28:16.301 IEEE OUI Identifier: e4 d2 5c 00:28:16.301 Multi-path I/O 00:28:16.301 May have multiple subsystem ports: Yes 00:28:16.301 May have multiple controllers: Yes 00:28:16.301 Associated with SR-IOV VF: No 00:28:16.301 Max Data Transfer Size: 131072 00:28:16.301 Max Number of Namespaces: 32 00:28:16.301 Max Number of I/O Queues: 127 00:28:16.301 NVMe Specification Version (VS): 1.3 00:28:16.301 NVMe Specification Version (Identify): 1.3 00:28:16.301 Maximum Queue Entries: 128 00:28:16.301 Contiguous Queues Required: Yes 00:28:16.301 Arbitration Mechanisms Supported 00:28:16.301 Weighted Round Robin: Not Supported 00:28:16.301 Vendor Specific: Not Supported 00:28:16.301 Reset Timeout: 15000 ms 00:28:16.301 Doorbell Stride: 4 bytes 00:28:16.301 
NVM Subsystem Reset: Not Supported 00:28:16.301 Command Sets Supported 00:28:16.301 NVM Command Set: Supported 00:28:16.301 Boot Partition: Not Supported 00:28:16.301 Memory Page Size Minimum: 4096 bytes 00:28:16.301 Memory Page Size Maximum: 4096 bytes 00:28:16.301 Persistent Memory Region: Not Supported 00:28:16.301 Optional Asynchronous Events Supported 00:28:16.301 Namespace Attribute Notices: Supported 00:28:16.301 Firmware Activation Notices: Not Supported 00:28:16.301 ANA Change Notices: Not Supported 00:28:16.301 PLE Aggregate Log Change Notices: Not Supported 00:28:16.301 LBA Status Info Alert Notices: Not Supported 00:28:16.301 EGE Aggregate Log Change Notices: Not Supported 00:28:16.301 Normal NVM Subsystem Shutdown event: Not Supported 00:28:16.301 Zone Descriptor Change Notices: Not Supported 00:28:16.301 Discovery Log Change Notices: Not Supported 00:28:16.301 Controller Attributes 00:28:16.301 128-bit Host Identifier: Supported 00:28:16.301 Non-Operational Permissive Mode: Not Supported 00:28:16.301 NVM Sets: Not Supported 00:28:16.301 Read Recovery Levels: Not Supported 00:28:16.301 Endurance Groups: Not Supported 00:28:16.301 Predictable Latency Mode: Not Supported 00:28:16.301 Traffic Based Keep ALive: Not Supported 00:28:16.301 Namespace Granularity: Not Supported 00:28:16.301 SQ Associations: Not Supported 00:28:16.301 UUID List: Not Supported 00:28:16.301 Multi-Domain Subsystem: Not Supported 00:28:16.301 Fixed Capacity Management: Not Supported 00:28:16.301 Variable Capacity Management: Not Supported 00:28:16.301 Delete Endurance Group: Not Supported 00:28:16.301 Delete NVM Set: Not Supported 00:28:16.301 Extended LBA Formats Supported: Not Supported 00:28:16.301 Flexible Data Placement Supported: Not Supported 00:28:16.301 00:28:16.301 Controller Memory Buffer Support 00:28:16.301 ================================ 00:28:16.301 Supported: No 00:28:16.301 00:28:16.301 Persistent Memory Region Support 00:28:16.301 ================================ 00:28:16.301 Supported: No 00:28:16.301 00:28:16.301 Admin Command Set Attributes 00:28:16.301 ============================ 00:28:16.301 Security Send/Receive: Not Supported 00:28:16.301 Format NVM: Not Supported 00:28:16.301 Firmware Activate/Download: Not Supported 00:28:16.301 Namespace Management: Not Supported 00:28:16.301 Device Self-Test: Not Supported 00:28:16.301 Directives: Not Supported 00:28:16.301 NVMe-MI: Not Supported 00:28:16.301 Virtualization Management: Not Supported 00:28:16.301 Doorbell Buffer Config: Not Supported 00:28:16.301 Get LBA Status Capability: Not Supported 00:28:16.301 Command & Feature Lockdown Capability: Not Supported 00:28:16.301 Abort Command Limit: 4 00:28:16.301 Async Event Request Limit: 4 00:28:16.301 Number of Firmware Slots: N/A 00:28:16.301 Firmware Slot 1 Read-Only: N/A 00:28:16.301 Firmware Activation Without Reset: N/A 00:28:16.301 Multiple Update Detection Support: N/A 00:28:16.301 Firmware Update Granularity: No Information Provided 00:28:16.301 Per-Namespace SMART Log: No 00:28:16.301 Asymmetric Namespace Access Log Page: Not Supported 00:28:16.301 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:16.301 Command Effects Log Page: Supported 00:28:16.301 Get Log Page Extended Data: Supported 00:28:16.301 Telemetry Log Pages: Not Supported 00:28:16.301 Persistent Event Log Pages: Not Supported 00:28:16.301 Supported Log Pages Log Page: May Support 00:28:16.301 Commands Supported & Effects Log Page: Not Supported 00:28:16.301 Feature Identifiers & Effects Log Page:May Support 
00:28:16.301 NVMe-MI Commands & Effects Log Page: May Support 00:28:16.301 Data Area 4 for Telemetry Log: Not Supported 00:28:16.301 Error Log Page Entries Supported: 128 00:28:16.301 Keep Alive: Supported 00:28:16.301 Keep Alive Granularity: 10000 ms 00:28:16.301 00:28:16.301 NVM Command Set Attributes 00:28:16.301 ========================== 00:28:16.301 Submission Queue Entry Size 00:28:16.301 Max: 64 00:28:16.301 Min: 64 00:28:16.301 Completion Queue Entry Size 00:28:16.301 Max: 16 00:28:16.301 Min: 16 00:28:16.301 Number of Namespaces: 32 00:28:16.301 Compare Command: Supported 00:28:16.301 Write Uncorrectable Command: Not Supported 00:28:16.301 Dataset Management Command: Supported 00:28:16.301 Write Zeroes Command: Supported 00:28:16.301 Set Features Save Field: Not Supported 00:28:16.301 Reservations: Supported 00:28:16.301 Timestamp: Not Supported 00:28:16.301 Copy: Supported 00:28:16.301 Volatile Write Cache: Present 00:28:16.301 Atomic Write Unit (Normal): 1 00:28:16.301 Atomic Write Unit (PFail): 1 00:28:16.301 Atomic Compare & Write Unit: 1 00:28:16.301 Fused Compare & Write: Supported 00:28:16.301 Scatter-Gather List 00:28:16.301 SGL Command Set: Supported 00:28:16.301 SGL Keyed: Supported 00:28:16.301 SGL Bit Bucket Descriptor: Not Supported 00:28:16.301 SGL Metadata Pointer: Not Supported 00:28:16.301 Oversized SGL: Not Supported 00:28:16.301 SGL Metadata Address: Not Supported 00:28:16.301 SGL Offset: Supported 00:28:16.301 Transport SGL Data Block: Not Supported 00:28:16.301 Replay Protected Memory Block: Not Supported 00:28:16.301 00:28:16.301 Firmware Slot Information 00:28:16.301 ========================= 00:28:16.301 Active slot: 1 00:28:16.301 Slot 1 Firmware Revision: 24.05.1 00:28:16.301 00:28:16.301 00:28:16.301 Commands Supported and Effects 00:28:16.301 ============================== 00:28:16.301 Admin Commands 00:28:16.301 -------------- 00:28:16.301 Get Log Page (02h): Supported 00:28:16.301 Identify (06h): Supported 00:28:16.301 Abort (08h): Supported 00:28:16.301 Set Features (09h): Supported 00:28:16.301 Get Features (0Ah): Supported 00:28:16.301 Asynchronous Event Request (0Ch): Supported 00:28:16.301 Keep Alive (18h): Supported 00:28:16.301 I/O Commands 00:28:16.301 ------------ 00:28:16.301 Flush (00h): Supported LBA-Change 00:28:16.301 Write (01h): Supported LBA-Change 00:28:16.301 Read (02h): Supported 00:28:16.301 Compare (05h): Supported 00:28:16.301 Write Zeroes (08h): Supported LBA-Change 00:28:16.302 Dataset Management (09h): Supported LBA-Change 00:28:16.302 Copy (19h): Supported LBA-Change 00:28:16.302 Unknown (79h): Supported LBA-Change 00:28:16.302 Unknown (7Ah): Supported 00:28:16.302 00:28:16.302 Error Log 00:28:16.302 ========= 00:28:16.302 00:28:16.302 Arbitration 00:28:16.302 =========== 00:28:16.302 Arbitration Burst: 1 00:28:16.302 00:28:16.302 Power Management 00:28:16.302 ================ 00:28:16.302 Number of Power States: 1 00:28:16.302 Current Power State: Power State #0 00:28:16.302 Power State #0: 00:28:16.302 Max Power: 0.00 W 00:28:16.302 Non-Operational State: Operational 00:28:16.302 Entry Latency: Not Reported 00:28:16.302 Exit Latency: Not Reported 00:28:16.302 Relative Read Throughput: 0 00:28:16.302 Relative Read Latency: 0 00:28:16.302 Relative Write Throughput: 0 00:28:16.302 Relative Write Latency: 0 00:28:16.302 Idle Power: Not Reported 00:28:16.302 Active Power: Not Reported 00:28:16.302 Non-Operational Permissive Mode: Not Supported 00:28:16.302 00:28:16.302 Health Information 00:28:16.302 ================== 
00:28:16.302 Critical Warnings: 00:28:16.302 Available Spare Space: OK 00:28:16.302 Temperature: OK 00:28:16.302 Device Reliability: OK 00:28:16.302 Read Only: No 00:28:16.302 Volatile Memory Backup: OK 00:28:16.302 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:16.302 Temperature Threshold: [2024-07-14 04:44:36.306421] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.302 [2024-07-14 04:44:36.306434] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5cb120) 00:28:16.302 [2024-07-14 04:44:36.306445] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.302 [2024-07-14 04:44:36.306467] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x624b90, cid 7, qid 0 00:28:16.302 [2024-07-14 04:44:36.306674] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.302 [2024-07-14 04:44:36.306687] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.302 [2024-07-14 04:44:36.306695] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.302 [2024-07-14 04:44:36.306701] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x624b90) on tqpair=0x5cb120 00:28:16.302 [2024-07-14 04:44:36.306739] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:16.302 [2024-07-14 04:44:36.306761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.302 [2024-07-14 04:44:36.306773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.302 [2024-07-14 04:44:36.306783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.302 [2024-07-14 04:44:36.306808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.302 [2024-07-14 04:44:36.306820] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.302 [2024-07-14 04:44:36.306828] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.302 [2024-07-14 04:44:36.306834] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5cb120) 00:28:16.302 [2024-07-14 04:44:36.306844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.302 [2024-07-14 04:44:36.306895] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x624610, cid 3, qid 0 00:28:16.302 [2024-07-14 04:44:36.307067] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.302 [2024-07-14 04:44:36.307080] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.302 [2024-07-14 04:44:36.307087] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.302 [2024-07-14 04:44:36.307094] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x624610) on tqpair=0x5cb120 00:28:16.302 [2024-07-14 04:44:36.307105] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.302 [2024-07-14 04:44:36.307113] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.302 [2024-07-14 04:44:36.307120] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5cb120) 00:28:16.302 [2024-07-14 04:44:36.307130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.302 [2024-07-14 04:44:36.307155] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x624610, cid 3, qid 0 00:28:16.302 [2024-07-14 04:44:36.310878] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.302 [2024-07-14 04:44:36.310895] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.302 [2024-07-14 04:44:36.310903] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.302 [2024-07-14 04:44:36.310913] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x624610) on tqpair=0x5cb120 00:28:16.302 [2024-07-14 04:44:36.310922] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:16.302 [2024-07-14 04:44:36.310929] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:16.302 [2024-07-14 04:44:36.310947] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.302 [2024-07-14 04:44:36.310972] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.302 [2024-07-14 04:44:36.310979] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5cb120) 00:28:16.302 [2024-07-14 04:44:36.310990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.302 [2024-07-14 04:44:36.311012] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x624610, cid 3, qid 0 00:28:16.302 [2024-07-14 04:44:36.311194] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.302 [2024-07-14 04:44:36.311210] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.302 [2024-07-14 04:44:36.311217] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.302 [2024-07-14 04:44:36.311224] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x624610) on tqpair=0x5cb120 00:28:16.302 [2024-07-14 04:44:36.311237] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:28:16.302 0 Kelvin (-273 Celsius) 00:28:16.302 Available Spare: 0% 00:28:16.302 Available Spare Threshold: 0% 00:28:16.302 Life Percentage Used: 0% 00:28:16.302 Data Units Read: 0 00:28:16.302 Data Units Written: 0 00:28:16.302 Host Read Commands: 0 00:28:16.302 Host Write Commands: 0 00:28:16.302 Controller Busy Time: 0 minutes 00:28:16.302 Power Cycles: 0 00:28:16.303 Power On Hours: 0 hours 00:28:16.303 Unsafe Shutdowns: 0 00:28:16.303 Unrecoverable Media Errors: 0 00:28:16.303 Lifetime Error Log Entries: 0 00:28:16.303 Warning Temperature Time: 0 minutes 00:28:16.303 Critical Temperature Time: 0 minutes 00:28:16.303 00:28:16.303 Number of Queues 00:28:16.303 ================ 00:28:16.303 Number of I/O Submission Queues: 127 00:28:16.303 Number of I/O Completion Queues: 127 00:28:16.303 00:28:16.303 Active Namespaces 00:28:16.303 ================= 00:28:16.303 Namespace ID:1 00:28:16.303 Error Recovery Timeout: Unlimited 00:28:16.303 Command Set Identifier: NVM (00h) 00:28:16.303 Deallocate: Supported 00:28:16.303 Deallocated/Unwritten Error: Not Supported 00:28:16.303 Deallocated Read Value: 
Unknown 00:28:16.303 Deallocate in Write Zeroes: Not Supported 00:28:16.303 Deallocated Guard Field: 0xFFFF 00:28:16.303 Flush: Supported 00:28:16.303 Reservation: Supported 00:28:16.303 Namespace Sharing Capabilities: Multiple Controllers 00:28:16.303 Size (in LBAs): 131072 (0GiB) 00:28:16.303 Capacity (in LBAs): 131072 (0GiB) 00:28:16.303 Utilization (in LBAs): 131072 (0GiB) 00:28:16.303 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:16.303 EUI64: ABCDEF0123456789 00:28:16.303 UUID: 7646b0fb-2378-4702-99b9-a614e3d776fd 00:28:16.303 Thin Provisioning: Not Supported 00:28:16.303 Per-NS Atomic Units: Yes 00:28:16.303 Atomic Boundary Size (Normal): 0 00:28:16.303 Atomic Boundary Size (PFail): 0 00:28:16.303 Atomic Boundary Offset: 0 00:28:16.303 Maximum Single Source Range Length: 65535 00:28:16.303 Maximum Copy Length: 65535 00:28:16.303 Maximum Source Range Count: 1 00:28:16.303 NGUID/EUI64 Never Reused: No 00:28:16.303 Namespace Write Protected: No 00:28:16.303 Number of LBA Formats: 1 00:28:16.303 Current LBA Format: LBA Format #00 00:28:16.303 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:16.303 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:16.303 rmmod nvme_tcp 00:28:16.303 rmmod nvme_fabrics 00:28:16.303 rmmod nvme_keyring 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2886400 ']' 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2886400 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 2886400 ']' 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 2886400 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2886400 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' 
reactor_0 = sudo ']' 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2886400' 00:28:16.303 killing process with pid 2886400 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 2886400 00:28:16.303 04:44:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 2886400 00:28:16.562 04:44:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:16.562 04:44:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:16.562 04:44:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:16.562 04:44:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:16.562 04:44:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:16.562 04:44:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.562 04:44:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:16.562 04:44:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.095 04:44:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:19.095 00:28:19.095 real 0m5.354s 00:28:19.095 user 0m4.286s 00:28:19.095 sys 0m1.869s 00:28:19.095 04:44:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:19.095 04:44:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.095 ************************************ 00:28:19.095 END TEST nvmf_identify 00:28:19.095 ************************************ 00:28:19.095 04:44:38 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:19.095 04:44:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:19.095 04:44:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:19.095 04:44:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:19.095 ************************************ 00:28:19.095 START TEST nvmf_perf 00:28:19.095 ************************************ 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:19.095 * Looking for test storage... 
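Before perf.sh gets going, the identify test above unwinds its own state. Collected into one place from the trace (paths shortened to the spdk checkout root; the namespace cleanup is done by the _remove_spdk_ns helper, whose body is not part of this excerpt), the teardown is roughly:

    sync
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp          # source of the "rmmod nvme_tcp" message above
    modprobe -v -r nvme-fabrics      # nvme_fabrics / nvme_keyring go with it
    kill 2886400 && wait 2886400     # stop the nvmf_tgt that served the identify test
    # _remove_spdk_ns                # presumably deletes cvl_0_0_ns_spdk; helper not shown here
    ip -4 addr flush cvl_0_1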
00:28:19.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.095 04:44:38 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:19.095 04:44:38 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:19.096 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:19.096 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.096 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:19.096 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:19.096 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:19.096 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.096 04:44:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:19.096 04:44:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.096 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:19.096 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:19.096 04:44:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:19.096 04:44:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:28:21.009 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.009 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:21.009 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:21.009 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:21.010 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:21.010 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:21.010 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:21.010 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:21.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:28:21.010 00:28:21.010 --- 10.0.0.2 ping statistics --- 00:28:21.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.010 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:21.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:28:21.010 00:28:21.010 --- 10.0.0.1 ping statistics --- 00:28:21.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.010 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2888383 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2888383 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 2888383 ']' 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:21.010 04:44:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:21.010 [2024-07-14 04:44:40.974455] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:21.010 [2024-07-14 04:44:40.974528] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.010 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.010 [2024-07-14 04:44:41.043448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:21.010 [2024-07-14 04:44:41.134509] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.010 [2024-07-14 04:44:41.134575] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
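nvmftestinit's network bring-up and the target launch are spread across the trace above; gathered into one sketch (same interface names and addresses as the log, with cvl_0_0 as the target-side port and cvl_0_1 as the initiator side), the sequence is roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                  # reachability check, as in the ping output above
    # start the target inside the namespace; -m 0xF gives it four reactors,
    # -e 0xFFFF matches the "Tracepoint Group Mask 0xFFFF" notice below
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &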
00:28:21.010 [2024-07-14 04:44:41.134592] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.010 [2024-07-14 04:44:41.134605] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:21.010 [2024-07-14 04:44:41.134617] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:21.010 [2024-07-14 04:44:41.134680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.011 [2024-07-14 04:44:41.134747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:21.011 [2024-07-14 04:44:41.134904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:21.011 [2024-07-14 04:44:41.134907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.268 04:44:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:21.268 04:44:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:28:21.268 04:44:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:21.268 04:44:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:21.268 04:44:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:21.268 04:44:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.268 04:44:41 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:21.268 04:44:41 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:24.544 04:44:44 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:24.544 04:44:44 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:24.544 04:44:44 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:24.544 04:44:44 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:24.801 04:44:44 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:24.801 04:44:44 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:28:24.801 04:44:44 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:24.801 04:44:44 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:24.801 04:44:44 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:25.058 [2024-07-14 04:44:45.145371] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.058 04:44:45 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:25.315 04:44:45 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:25.315 04:44:45 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:25.572 04:44:45 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:25.572 04:44:45 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:25.829 04:44:45 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:26.087 [2024-07-14 04:44:46.141027] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.087 04:44:46 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:26.343 04:44:46 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:26.343 04:44:46 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:26.343 04:44:46 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:26.343 04:44:46 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:27.714 Initializing NVMe Controllers 00:28:27.714 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:27.714 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:27.714 Initialization complete. Launching workers. 00:28:27.714 ======================================================== 00:28:27.714 Latency(us) 00:28:27.714 Device Information : IOPS MiB/s Average min max 00:28:27.714 PCIE (0000:88:00.0) NSID 1 from core 0: 83644.40 326.74 382.02 43.39 4331.30 00:28:27.714 ======================================================== 00:28:27.714 Total : 83644.40 326.74 382.02 43.39 4331.30 00:28:27.714 00:28:27.714 04:44:47 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:27.714 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.646 Initializing NVMe Controllers 00:28:28.646 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:28.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:28.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:28.646 Initialization complete. Launching workers. 
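The target-side RPC calls are interleaved with perf output above; pulled together (rpc.py shortened to its checkout-relative path, Nvme0n1 being the local 0000:88:00.0 controller that gen_nvme.sh / load_subsystem_config attached), the configuration amounts to the sketch below. The 1-deep TCP run whose workers were just launched reports its latency summary right after it.

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_malloc_create 64 512                     # 64 MiB RAM bdev with 512 B blocks -> Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420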
00:28:28.646 ======================================================== 00:28:28.646 Latency(us) 00:28:28.646 Device Information : IOPS MiB/s Average min max 00:28:28.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 109.00 0.43 9389.17 209.85 45771.46 00:28:28.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15264.72 7925.19 50880.44 00:28:28.646 ======================================================== 00:28:28.646 Total : 175.00 0.68 11605.09 209.85 50880.44 00:28:28.646 00:28:28.903 04:44:48 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:28.903 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.347 Initializing NVMe Controllers 00:28:30.347 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:30.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:30.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:30.347 Initialization complete. Launching workers. 00:28:30.347 ======================================================== 00:28:30.347 Latency(us) 00:28:30.347 Device Information : IOPS MiB/s Average min max 00:28:30.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8414.00 32.87 3806.08 473.31 7951.54 00:28:30.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3863.00 15.09 8328.25 6856.44 15591.84 00:28:30.347 ======================================================== 00:28:30.347 Total : 12277.00 47.96 5229.00 473.31 15591.84 00:28:30.347 00:28:30.347 04:44:50 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:30.347 04:44:50 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:30.347 04:44:50 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:30.347 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.877 Initializing NVMe Controllers 00:28:32.877 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:32.877 Controller IO queue size 128, less than required. 00:28:32.877 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:32.877 Controller IO queue size 128, less than required. 00:28:32.877 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:32.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:32.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:32.877 Initialization complete. Launching workers. 
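The same spdk_nvme_perf binary drives both the local PCIe baseline and the fabrics runs; only the -r transport ID changes. A sketch of the two shapes used here, with flag meanings inferred from the invocations rather than quoted from perf's help text (the mixed-size -o 262144 / -O 16384 run whose workers were just launched reports below):

    # local PCIe baseline (perf.sh@53)
    ./build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:PCIe traddr:0000:88:00.0'
    # NVMe/TCP target built above (perf.sh@56 onward)
    ./build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    # -q queue depth, -o I/O size in bytes, -w workload pattern, -M read share of the mix,
    # -t run time in seconds, -r transport ID; -H/-I appear to enable TCP header/data digests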
00:28:32.877 ======================================================== 00:28:32.877 Latency(us) 00:28:32.877 Device Information : IOPS MiB/s Average min max 00:28:32.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 858.96 214.74 153023.18 61638.56 251209.11 00:28:32.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 557.15 139.29 235964.56 113006.81 377275.68 00:28:32.877 ======================================================== 00:28:32.877 Total : 1416.12 354.03 185655.37 61638.56 377275.68 00:28:32.877 00:28:32.877 04:44:52 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:32.877 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.134 No valid NVMe controllers or AIO or URING devices found 00:28:33.134 Initializing NVMe Controllers 00:28:33.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:33.134 Controller IO queue size 128, less than required. 00:28:33.134 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:33.134 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:33.134 Controller IO queue size 128, less than required. 00:28:33.134 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:33.134 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:33.134 WARNING: Some requested NVMe devices were skipped 00:28:33.134 04:44:53 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:33.134 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.412 Initializing NVMe Controllers 00:28:36.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:36.412 Controller IO queue size 128, less than required. 00:28:36.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:36.412 Controller IO queue size 128, less than required. 00:28:36.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:36.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:36.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:36.412 Initialization complete. Launching workers. 
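The last run in this group adds --transport-stat, which makes perf dump per-qpair TCP polling counters (polls, idle_polls, sock_completions, nvme_completions, submitted_requests, queued_requests; the dump follows below). Assuming the fields mean what their names suggest, a quick back-of-the-envelope read of the NSID 1 qpair:

    # polls=31773, idle_polls=9299, nvme_completions=3573 (from the statistics below)
    awk 'BEGIN { busy = 31773 - 9299;                       # 22474 polls found socket work
                 printf "%.2f NVMe completions per busy poll\n", 3573 / busy }'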
00:28:36.412 00:28:36.412 ==================== 00:28:36.412 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:36.412 TCP transport: 00:28:36.412 polls: 31773 00:28:36.412 idle_polls: 9299 00:28:36.412 sock_completions: 22474 00:28:36.412 nvme_completions: 3573 00:28:36.412 submitted_requests: 5364 00:28:36.412 queued_requests: 1 00:28:36.412 00:28:36.412 ==================== 00:28:36.412 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:36.412 TCP transport: 00:28:36.412 polls: 34986 00:28:36.412 idle_polls: 12799 00:28:36.412 sock_completions: 22187 00:28:36.412 nvme_completions: 3681 00:28:36.412 submitted_requests: 5584 00:28:36.412 queued_requests: 1 00:28:36.412 ======================================================== 00:28:36.412 Latency(us) 00:28:36.412 Device Information : IOPS MiB/s Average min max 00:28:36.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 892.04 223.01 148268.31 93300.26 216113.30 00:28:36.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 919.01 229.75 142050.03 63259.07 184274.91 00:28:36.413 ======================================================== 00:28:36.413 Total : 1811.05 452.76 145112.87 63259.07 216113.30 00:28:36.413 00:28:36.413 04:44:55 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:36.413 04:44:55 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:36.413 04:44:56 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:36.413 04:44:56 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:28:36.413 04:44:56 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:39.690 04:44:59 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=6350bb7c-5007-4376-a7bc-baa3ff10b2c8 00:28:39.690 04:44:59 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 6350bb7c-5007-4376-a7bc-baa3ff10b2c8 00:28:39.690 04:44:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=6350bb7c-5007-4376-a7bc-baa3ff10b2c8 00:28:39.690 04:44:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:39.690 04:44:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:39.690 04:44:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:39.690 04:44:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:39.690 04:44:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:39.690 { 00:28:39.690 "uuid": "6350bb7c-5007-4376-a7bc-baa3ff10b2c8", 00:28:39.690 "name": "lvs_0", 00:28:39.690 "base_bdev": "Nvme0n1", 00:28:39.690 "total_data_clusters": 238234, 00:28:39.690 "free_clusters": 238234, 00:28:39.690 "block_size": 512, 00:28:39.690 "cluster_size": 4194304 00:28:39.690 } 00:28:39.690 ]' 00:28:39.690 04:44:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="6350bb7c-5007-4376-a7bc-baa3ff10b2c8") .free_clusters' 00:28:39.690 04:44:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234 00:28:39.690 04:44:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="6350bb7c-5007-4376-a7bc-baa3ff10b2c8") .cluster_size' 00:28:39.690 04:44:59 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:39.690 04:44:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:28:39.690 04:44:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:28:39.690 952936 00:28:39.690 04:44:59 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:39.690 04:44:59 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:39.690 04:44:59 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6350bb7c-5007-4376-a7bc-baa3ff10b2c8 lbd_0 20480 00:28:39.948 04:45:00 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=016d2838-f009-48f5-be11-2de7ed46ca12 00:28:39.948 04:45:00 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 016d2838-f009-48f5-be11-2de7ed46ca12 lvs_n_0 00:28:40.881 04:45:01 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=fbaed988-bd81-4c5c-b81f-23c19e9a41d1 00:28:40.881 04:45:01 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb fbaed988-bd81-4c5c-b81f-23c19e9a41d1 00:28:40.881 04:45:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=fbaed988-bd81-4c5c-b81f-23c19e9a41d1 00:28:40.881 04:45:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:40.881 04:45:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:40.881 04:45:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:40.881 04:45:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:41.139 04:45:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:41.139 { 00:28:41.139 "uuid": "6350bb7c-5007-4376-a7bc-baa3ff10b2c8", 00:28:41.139 "name": "lvs_0", 00:28:41.139 "base_bdev": "Nvme0n1", 00:28:41.139 "total_data_clusters": 238234, 00:28:41.139 "free_clusters": 233114, 00:28:41.139 "block_size": 512, 00:28:41.139 "cluster_size": 4194304 00:28:41.139 }, 00:28:41.139 { 00:28:41.139 "uuid": "fbaed988-bd81-4c5c-b81f-23c19e9a41d1", 00:28:41.139 "name": "lvs_n_0", 00:28:41.139 "base_bdev": "016d2838-f009-48f5-be11-2de7ed46ca12", 00:28:41.139 "total_data_clusters": 5114, 00:28:41.139 "free_clusters": 5114, 00:28:41.139 "block_size": 512, 00:28:41.139 "cluster_size": 4194304 00:28:41.139 } 00:28:41.139 ]' 00:28:41.139 04:45:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="fbaed988-bd81-4c5c-b81f-23c19e9a41d1") .free_clusters' 00:28:41.398 04:45:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:28:41.398 04:45:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="fbaed988-bd81-4c5c-b81f-23c19e9a41d1") .cluster_size' 00:28:41.398 04:45:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:41.398 04:45:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:28:41.398 04:45:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:28:41.398 20456 00:28:41.398 04:45:01 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:41.398 04:45:01 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fbaed988-bd81-4c5c-b81f-23c19e9a41d1 lbd_nest_0 20456 00:28:41.656 04:45:01 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=72cb86e2-6656-4655-98fc-f5b6c36b932c 00:28:41.656 04:45:01 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:41.913 04:45:01 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:41.913 04:45:01 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 72cb86e2-6656-4655-98fc-f5b6c36b932c 00:28:42.171 04:45:02 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:42.429 04:45:02 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:42.429 04:45:02 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:42.429 04:45:02 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:42.429 04:45:02 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:42.429 04:45:02 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:42.429 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.618 Initializing NVMe Controllers 00:28:54.618 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:54.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:54.618 Initialization complete. Launching workers. 00:28:54.618 ======================================================== 00:28:54.618 Latency(us) 00:28:54.618 Device Information : IOPS MiB/s Average min max 00:28:54.618 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 43.19 0.02 23201.93 240.00 47885.54 00:28:54.618 ======================================================== 00:28:54.618 Total : 43.19 0.02 23201.93 240.00 47885.54 00:28:54.618 00:28:54.618 04:45:12 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:54.618 04:45:12 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:54.618 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.609 Initializing NVMe Controllers 00:29:04.609 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:04.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:04.609 Initialization complete. Launching workers. 
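get_lvs_free_mb sizes each volume from the lvstore dump: free_clusters times cluster_size, converted to MiB. Redoing the arithmetic with the numbers printed above (a sketch only; the helper itself lives in autotest_common.sh), and the create call that follows from it:

    echo $(( 238234 * 4194304 / 1024 / 1024 ))   # lvs_0 on Nvme0n1: 952936 MiB free, capped to 20480 for lbd_0
    echo $((   5114 * 4194304 / 1024 / 1024 ))   # nested lvs_n_0 on lbd_0: 20456 MiB free
    scripts/rpc.py bdev_lvol_create -u fbaed988-bd81-4c5c-b81f-23c19e9a41d1 lbd_nest_0 20456

The q=1, o=512 run started at the end of the previous line reports its summary just below.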
00:29:04.609 ======================================================== 00:29:04.609 Latency(us) 00:29:04.609 Device Information : IOPS MiB/s Average min max 00:29:04.609 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 84.20 10.53 11881.28 6011.78 47870.20 00:29:04.609 ======================================================== 00:29:04.609 Total : 84.20 10.53 11881.28 6011.78 47870.20 00:29:04.609 00:29:04.609 04:45:23 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:04.609 04:45:23 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:04.609 04:45:23 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:04.609 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.577 Initializing NVMe Controllers 00:29:14.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:14.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:14.577 Initialization complete. Launching workers. 00:29:14.577 ======================================================== 00:29:14.577 Latency(us) 00:29:14.577 Device Information : IOPS MiB/s Average min max 00:29:14.577 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6979.94 3.41 4584.07 324.03 12065.13 00:29:14.577 ======================================================== 00:29:14.577 Total : 6979.94 3.41 4584.07 324.03 12065.13 00:29:14.577 00:29:14.577 04:45:33 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:14.577 04:45:33 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:14.577 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.537 Initializing NVMe Controllers 00:29:24.537 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:24.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:24.537 Initialization complete. Launching workers. 00:29:24.537 ======================================================== 00:29:24.537 Latency(us) 00:29:24.537 Device Information : IOPS MiB/s Average min max 00:29:24.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1709.30 213.66 18736.33 1993.69 41137.19 00:29:24.537 ======================================================== 00:29:24.537 Total : 1709.30 213.66 18736.33 1993.69 41137.19 00:29:24.537 00:29:24.537 04:45:43 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:24.537 04:45:43 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:24.537 04:45:43 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:24.537 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.496 Initializing NVMe Controllers 00:29:34.496 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:34.496 Controller IO queue size 128, less than required. 00:29:34.496 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:34.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:34.496 Initialization complete. Launching workers. 00:29:34.496 ======================================================== 00:29:34.496 Latency(us) 00:29:34.496 Device Information : IOPS MiB/s Average min max 00:29:34.496 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11854.97 5.79 10801.74 1858.26 23201.55 00:29:34.496 ======================================================== 00:29:34.496 Total : 11854.97 5.79 10801.74 1858.26 23201.55 00:29:34.496 00:29:34.496 04:45:54 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:34.496 04:45:54 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:34.496 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.494 Initializing NVMe Controllers 00:29:44.494 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:44.494 Controller IO queue size 128, less than required. 00:29:44.494 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:44.494 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:44.494 Initialization complete. Launching workers. 00:29:44.494 ======================================================== 00:29:44.494 Latency(us) 00:29:44.494 Device Information : IOPS MiB/s Average min max 00:29:44.494 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1204.00 150.50 106772.59 22948.97 214494.35 00:29:44.494 ======================================================== 00:29:44.494 Total : 1204.00 150.50 106772.59 22948.97 214494.35 00:29:44.494 00:29:44.494 04:46:04 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:44.494 04:46:04 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 72cb86e2-6656-4655-98fc-f5b6c36b932c 00:29:45.425 04:46:05 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:45.683 04:46:05 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 016d2838-f009-48f5-be11-2de7ed46ca12 00:29:45.940 04:46:05 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:46.198 rmmod nvme_tcp 00:29:46.198 rmmod nvme_fabrics 00:29:46.198 rmmod nvme_keyring 00:29:46.198 04:46:06 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2888383 ']' 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2888383 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 2888383 ']' 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 2888383 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2888383 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2888383' 00:29:46.198 killing process with pid 2888383 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 2888383 00:29:46.198 04:46:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 2888383 00:29:48.095 04:46:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:48.095 04:46:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:48.095 04:46:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:48.095 04:46:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:48.095 04:46:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:48.095 04:46:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.095 04:46:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:48.095 04:46:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.003 04:46:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:50.003 00:29:50.003 real 1m31.169s 00:29:50.003 user 5m34.161s 00:29:50.003 sys 0m16.420s 00:29:50.003 04:46:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:50.003 04:46:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:50.003 ************************************ 00:29:50.003 END TEST nvmf_perf 00:29:50.003 ************************************ 00:29:50.003 04:46:09 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:50.003 04:46:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:50.003 04:46:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:50.003 04:46:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:50.003 ************************************ 00:29:50.003 START TEST nvmf_fio_host 00:29:50.003 ************************************ 00:29:50.003 04:46:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:50.003 * Looking for test storage... 
00:29:50.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:50.003 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:50.004 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:50.004 04:46:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:50.004 04:46:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:50.004 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:50.004 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.004 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:50.004 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:50.004 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:50.004 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.004 04:46:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:50.004 04:46:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.004 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:50.004 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:50.004 04:46:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:50.004 04:46:10 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:51.904 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:51.904 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:51.904 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:51.904 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:51.905 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.905 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:51.905 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:51.905 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.905 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:51.905 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
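The trace above resolves each supported PCI function (the two E810 ports at 0000:0a:00.0 and 0000:0a:00.1) to its kernel interface by globbing the device's net/ directory in sysfs before the TCP test bed is configured. A minimal standalone sketch of that lookup, using only the PCI addresses and interface names reported in this run — the script body is illustrative, not the actual nvmf/common.sh:

#!/usr/bin/env bash
# Map NVMe-oF-capable PCI functions to their kernel net device names,
# mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) step in the trace.
shopt -s nullglob                          # empty glob -> empty array, so the count check below is reliable
pci_devs=("0000:0a:00.0" "0000:0a:00.1")   # the two E810 ports found in this log

for pci in "${pci_devs[@]}"; do
    # Every entry under .../net/ is a kernel interface bound to this PCI function.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    if (( ${#pci_net_devs[@]} == 0 )); then
        echo "no net devices under $pci" >&2
        continue
    fi
    # Keep only the interface names (cvl_0_0 / cvl_0_1 in this run), dropping the sysfs prefix.
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done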
00:29:51.905 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:51.905 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:51.905 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:51.905 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:51.905 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:51.905 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:51.905 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:51.905 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:51.905 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:51.905 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:51.905 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:51.905 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:51.905 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:51.905 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:51.905 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:51.905 04:46:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:51.905 04:46:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:51.905 04:46:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:51.905 04:46:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:51.905 04:46:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:51.905 04:46:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:51.905 04:46:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:51.905 04:46:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:51.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:51.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:29:51.905 00:29:51.905 --- 10.0.0.2 ping statistics --- 00:29:51.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.905 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:29:51.905 04:46:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:51.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:51.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:29:51.905 00:29:51.905 --- 10.0.0.1 ping statistics --- 00:29:51.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.905 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:29:51.905 04:46:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:51.905 04:46:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:51.905 04:46:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:51.905 04:46:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:51.905 04:46:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:51.905 04:46:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:51.905 04:46:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:51.905 04:46:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:51.905 04:46:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:52.163 04:46:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:52.163 04:46:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:52.163 04:46:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:52.163 04:46:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.163 04:46:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2901080 00:29:52.163 04:46:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:52.163 04:46:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:52.163 04:46:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2901080 00:29:52.163 04:46:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 2901080 ']' 00:29:52.163 04:46:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.163 04:46:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:52.163 04:46:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.163 04:46:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:52.163 04:46:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.163 [2024-07-14 04:46:12.150361] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:52.163 [2024-07-14 04:46:12.150448] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.163 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.163 [2024-07-14 04:46:12.213526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:52.163 [2024-07-14 04:46:12.298891] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:52.163 [2024-07-14 04:46:12.298955] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.163 [2024-07-14 04:46:12.298969] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.163 [2024-07-14 04:46:12.298980] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.163 [2024-07-14 04:46:12.298989] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.164 [2024-07-14 04:46:12.299045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.164 [2024-07-14 04:46:12.299104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:52.164 [2024-07-14 04:46:12.299170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:52.164 [2024-07-14 04:46:12.299172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.422 04:46:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:52.422 04:46:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:29:52.422 04:46:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:52.681 [2024-07-14 04:46:12.695521] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.681 04:46:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:52.681 04:46:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:52.681 04:46:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.681 04:46:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:52.938 Malloc1 00:29:52.938 04:46:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:53.205 04:46:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:53.466 04:46:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.724 [2024-07-14 04:46:13.803014] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.724 04:46:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:53.980 04:46:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:53.980 04:46:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:53.980 04:46:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:29:53.980 04:46:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:53.980 04:46:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:53.980 04:46:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:53.981 04:46:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:53.981 04:46:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:53.981 04:46:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:53.981 04:46:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:53.981 04:46:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:53.981 04:46:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:53.981 04:46:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:53.981 04:46:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:53.981 04:46:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:53.981 04:46:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:53.981 04:46:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:53.981 04:46:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:53.981 04:46:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:53.981 04:46:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:53.981 04:46:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:53.981 04:46:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:53.981 04:46:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:54.238 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:54.238 fio-3.35 00:29:54.238 Starting 1 thread 00:29:54.238 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.769 00:29:56.769 test: (groupid=0, jobs=1): err= 0: pid=2901437: Sun Jul 14 04:46:16 2024 00:29:56.769 read: IOPS=9212, BW=36.0MiB/s (37.7MB/s)(72.2MiB/2006msec) 00:29:56.769 slat (nsec): min=1829, max=104749, avg=2471.21, stdev=1444.33 00:29:56.769 clat (usec): min=3013, max=12618, avg=7694.10, stdev=555.24 00:29:56.769 lat (usec): min=3035, max=12621, avg=7696.57, stdev=555.17 00:29:56.769 clat percentiles (usec): 00:29:56.769 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7242], 00:29:56.769 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7832], 00:29:56.769 | 70.00th=[ 7963], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:29:56.769 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[11600], 99.95th=[11994], 00:29:56.769 | 99.99th=[12518] 00:29:56.769 bw ( KiB/s): 
min=35832, max=37544, per=99.91%, avg=36816.00, stdev=716.29, samples=4 00:29:56.769 iops : min= 8958, max= 9386, avg=9204.00, stdev=179.07, samples=4 00:29:56.769 write: IOPS=9216, BW=36.0MiB/s (37.8MB/s)(72.2MiB/2006msec); 0 zone resets 00:29:56.769 slat (nsec): min=1992, max=93526, avg=2585.92, stdev=1210.09 00:29:56.769 clat (usec): min=1221, max=12200, avg=6153.04, stdev=495.17 00:29:56.769 lat (usec): min=1227, max=12202, avg=6155.63, stdev=495.12 00:29:56.769 clat percentiles (usec): 00:29:56.769 | 1.00th=[ 5014], 5.00th=[ 5407], 10.00th=[ 5604], 20.00th=[ 5800], 00:29:56.769 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6128], 60.00th=[ 6259], 00:29:56.769 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6849], 00:29:56.769 | 99.00th=[ 7177], 99.50th=[ 7308], 99.90th=[ 9241], 99.95th=[11076], 00:29:56.769 | 99.99th=[11863] 00:29:56.769 bw ( KiB/s): min=36688, max=37056, per=100.00%, avg=36868.00, stdev=159.13, samples=4 00:29:56.769 iops : min= 9172, max= 9264, avg=9217.00, stdev=39.78, samples=4 00:29:56.769 lat (msec) : 2=0.01%, 4=0.12%, 10=99.75%, 20=0.12% 00:29:56.769 cpu : usr=53.62%, sys=37.46%, ctx=61, majf=0, minf=31 00:29:56.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:56.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:56.769 issued rwts: total=18480,18489,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.769 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:56.769 00:29:56.769 Run status group 0 (all jobs): 00:29:56.769 READ: bw=36.0MiB/s (37.7MB/s), 36.0MiB/s-36.0MiB/s (37.7MB/s-37.7MB/s), io=72.2MiB (75.7MB), run=2006-2006msec 00:29:56.769 WRITE: bw=36.0MiB/s (37.8MB/s), 36.0MiB/s-36.0MiB/s (37.8MB/s-37.8MB/s), io=72.2MiB (75.7MB), run=2006-2006msec 00:29:56.769 04:46:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:56.769 04:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:56.769 04:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:56.769 04:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:56.769 04:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:56.770 04:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:56.770 04:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:56.770 04:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:56.770 04:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:56.770 04:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:56.770 04:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:56.770 04:46:16 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:56.770 04:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:56.770 04:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:56.770 04:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:56.770 04:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:56.770 04:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:56.770 04:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:56.770 04:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:56.770 04:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:56.770 04:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:56.770 04:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:56.770 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:56.770 fio-3.35 00:29:56.770 Starting 1 thread 00:29:56.770 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.299 00:29:59.299 test: (groupid=0, jobs=1): err= 0: pid=2901766: Sun Jul 14 04:46:19 2024 00:29:59.299 read: IOPS=7441, BW=116MiB/s (122MB/s)(233MiB/2005msec) 00:29:59.299 slat (usec): min=2, max=110, avg= 3.68, stdev= 1.75 00:29:59.299 clat (usec): min=3446, max=24963, avg=10519.41, stdev=2927.09 00:29:59.299 lat (usec): min=3450, max=24967, avg=10523.09, stdev=2927.26 00:29:59.299 clat percentiles (usec): 00:29:59.299 | 1.00th=[ 5014], 5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 7898], 00:29:59.299 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[10290], 60.00th=[11076], 00:29:59.299 | 70.00th=[11863], 80.00th=[12911], 90.00th=[14484], 95.00th=[15664], 00:29:59.299 | 99.00th=[18220], 99.50th=[19530], 99.90th=[20579], 99.95th=[20579], 00:29:59.299 | 99.99th=[22414] 00:29:59.299 bw ( KiB/s): min=54912, max=67008, per=50.90%, avg=60608.00, stdev=5589.41, samples=4 00:29:59.299 iops : min= 3432, max= 4188, avg=3788.00, stdev=349.34, samples=4 00:29:59.299 write: IOPS=4431, BW=69.2MiB/s (72.6MB/s)(124MiB/1790msec); 0 zone resets 00:29:59.299 slat (usec): min=30, max=257, avg=34.14, stdev= 6.07 00:29:59.299 clat (usec): min=4678, max=23141, avg=11891.98, stdev=2285.31 00:29:59.299 lat (usec): min=4710, max=23178, avg=11926.12, stdev=2286.70 00:29:59.299 clat percentiles (usec): 00:29:59.299 | 1.00th=[ 7701], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9765], 00:29:59.299 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11731], 60.00th=[12256], 00:29:59.299 | 70.00th=[12911], 80.00th=[13829], 90.00th=[15008], 95.00th=[15926], 00:29:59.299 | 99.00th=[17957], 99.50th=[18744], 99.90th=[19792], 99.95th=[22938], 00:29:59.299 | 99.99th=[23200] 00:29:59.299 bw ( KiB/s): min=56320, max=70208, per=88.72%, avg=62904.00, stdev=6167.20, samples=4 00:29:59.299 iops : min= 3520, max= 4388, avg=3931.50, stdev=385.45, samples=4 00:29:59.299 lat (msec) : 4=0.04%, 10=37.38%, 20=62.31%, 50=0.27% 00:29:59.299 cpu : usr=74.05%, sys=21.61%, ctx=13, majf=0, 
minf=51 00:29:59.299 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:29:59.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:59.299 issued rwts: total=14920,7932,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:59.299 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:59.299 00:29:59.299 Run status group 0 (all jobs): 00:29:59.299 READ: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=233MiB (244MB), run=2005-2005msec 00:29:59.299 WRITE: bw=69.2MiB/s (72.6MB/s), 69.2MiB/s-69.2MiB/s (72.6MB/s-72.6MB/s), io=124MiB (130MB), run=1790-1790msec 00:29:59.299 04:46:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:59.299 04:46:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:59.299 04:46:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:59.299 04:46:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:59.299 04:46:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:59.299 04:46:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:29:59.299 04:46:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:59.299 04:46:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:59.299 04:46:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:29:59.299 04:46:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:29:59.299 04:46:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:29:59.299 04:46:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:02.636 Nvme0n1 00:30:02.636 04:46:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:05.167 04:46:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=d554374c-e09a-4b82-9431-521697057959 00:30:05.167 04:46:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb d554374c-e09a-4b82-9431-521697057959 00:30:05.167 04:46:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=d554374c-e09a-4b82-9431-521697057959 00:30:05.167 04:46:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:05.167 04:46:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:05.167 04:46:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:05.167 04:46:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:05.426 04:46:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:05.426 { 00:30:05.426 "uuid": "d554374c-e09a-4b82-9431-521697057959", 00:30:05.426 "name": "lvs_0", 00:30:05.426 "base_bdev": "Nvme0n1", 00:30:05.426 "total_data_clusters": 930, 00:30:05.426 "free_clusters": 930, 00:30:05.426 
"block_size": 512, 00:30:05.426 "cluster_size": 1073741824 00:30:05.426 } 00:30:05.426 ]' 00:30:05.426 04:46:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="d554374c-e09a-4b82-9431-521697057959") .free_clusters' 00:30:05.426 04:46:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:30:05.426 04:46:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="d554374c-e09a-4b82-9431-521697057959") .cluster_size' 00:30:05.685 04:46:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:30:05.685 04:46:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=952320 00:30:05.685 04:46:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 952320 00:30:05.685 952320 00:30:05.685 04:46:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:05.945 143f8a4b-70e9-403c-8021-d53b1d8f82e2 00:30:05.945 04:46:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:06.203 04:46:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:06.460 04:46:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:06.718 04:46:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:06.984 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:06.984 fio-3.35 00:30:06.984 Starting 1 thread 00:30:06.984 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.547 00:30:09.547 test: (groupid=0, jobs=1): err= 0: pid=2903053: Sun Jul 14 04:46:29 2024 00:30:09.547 read: IOPS=5955, BW=23.3MiB/s (24.4MB/s)(46.7MiB/2007msec) 00:30:09.547 slat (nsec): min=1964, max=171189, avg=2659.79, stdev=2348.49 00:30:09.547 clat (usec): min=1071, max=171654, avg=11837.78, stdev=11694.68 00:30:09.547 lat (usec): min=1075, max=171695, avg=11840.44, stdev=11695.07 00:30:09.547 clat percentiles (msec): 00:30:09.547 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:30:09.547 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:30:09.547 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:30:09.547 | 99.00th=[ 14], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:30:09.547 | 99.99th=[ 171] 00:30:09.547 bw ( KiB/s): min=16528, max=26392, per=99.72%, avg=23754.00, stdev=4822.01, samples=4 00:30:09.547 iops : min= 4132, max= 6598, avg=5938.50, stdev=1205.50, samples=4 00:30:09.547 write: IOPS=5947, BW=23.2MiB/s (24.4MB/s)(46.6MiB/2007msec); 0 zone resets 00:30:09.547 slat (usec): min=2, max=131, avg= 2.73, stdev= 1.73 00:30:09.547 clat (usec): min=418, max=169839, avg=9463.44, stdev=10995.95 00:30:09.547 lat (usec): min=421, max=169847, avg=9466.17, stdev=10996.30 00:30:09.547 clat percentiles (msec): 00:30:09.547 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:30:09.547 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:30:09.547 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:30:09.547 | 99.00th=[ 11], 99.50th=[ 17], 99.90th=[ 169], 99.95th=[ 169], 00:30:09.547 | 99.99th=[ 169] 00:30:09.547 bw ( KiB/s): min=17576, max=25888, per=99.91%, avg=23770.00, stdev=4130.02, samples=4 00:30:09.547 iops : min= 4394, max= 6472, avg=5942.50, stdev=1032.51, samples=4 00:30:09.547 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:09.547 lat (msec) : 2=0.03%, 4=0.15%, 10=53.97%, 20=45.31%, 250=0.54% 00:30:09.547 cpu : usr=53.14%, sys=41.48%, ctx=82, majf=0, minf=31 00:30:09.547 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:09.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:30:09.547 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:09.547 issued rwts: total=11952,11937,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.547 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:09.547 00:30:09.547 Run status group 0 (all jobs): 00:30:09.547 READ: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=46.7MiB (49.0MB), run=2007-2007msec 00:30:09.547 WRITE: bw=23.2MiB/s (24.4MB/s), 23.2MiB/s-23.2MiB/s (24.4MB/s-24.4MB/s), io=46.6MiB (48.9MB), run=2007-2007msec 00:30:09.547 04:46:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:09.547 04:46:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:10.923 04:46:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=fbf5e621-c264-4b94-a220-7b636e38705b 00:30:10.923 04:46:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb fbf5e621-c264-4b94-a220-7b636e38705b 00:30:10.923 04:46:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=fbf5e621-c264-4b94-a220-7b636e38705b 00:30:10.923 04:46:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:10.923 04:46:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:10.923 04:46:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:10.923 04:46:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:10.924 04:46:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:10.924 { 00:30:10.924 "uuid": "d554374c-e09a-4b82-9431-521697057959", 00:30:10.924 "name": "lvs_0", 00:30:10.924 "base_bdev": "Nvme0n1", 00:30:10.924 "total_data_clusters": 930, 00:30:10.924 "free_clusters": 0, 00:30:10.924 "block_size": 512, 00:30:10.924 "cluster_size": 1073741824 00:30:10.924 }, 00:30:10.924 { 00:30:10.924 "uuid": "fbf5e621-c264-4b94-a220-7b636e38705b", 00:30:10.924 "name": "lvs_n_0", 00:30:10.924 "base_bdev": "143f8a4b-70e9-403c-8021-d53b1d8f82e2", 00:30:10.924 "total_data_clusters": 237847, 00:30:10.924 "free_clusters": 237847, 00:30:10.924 "block_size": 512, 00:30:10.924 "cluster_size": 4194304 00:30:10.924 } 00:30:10.924 ]' 00:30:10.924 04:46:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="fbf5e621-c264-4b94-a220-7b636e38705b") .free_clusters' 00:30:10.924 04:46:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:30:10.924 04:46:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="fbf5e621-c264-4b94-a220-7b636e38705b") .cluster_size' 00:30:10.924 04:46:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:10.924 04:46:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:30:10.924 04:46:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:30:10.924 951388 00:30:10.924 04:46:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:11.858 40eee1cc-f40f-4099-9918-df1f0c20d332 00:30:11.858 04:46:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:11.858 04:46:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:12.116 04:46:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:12.374 04:46:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:12.374 04:46:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:12.374 04:46:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:12.374 04:46:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:12.374 04:46:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:12.374 04:46:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:12.374 04:46:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:12.374 04:46:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:12.374 04:46:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:12.374 04:46:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:12.374 04:46:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:12.375 04:46:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:12.375 04:46:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:12.375 04:46:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:12.375 04:46:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:12.375 04:46:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:12.375 04:46:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:12.375 04:46:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:12.375 04:46:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:12.375 04:46:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:12.375 04:46:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:12.375 04:46:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:12.633 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:12.633 fio-3.35 00:30:12.633 Starting 1 thread 00:30:12.633 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.177 00:30:15.177 test: (groupid=0, jobs=1): err= 0: pid=2903897: Sun Jul 14 04:46:35 2024 00:30:15.177 read: IOPS=5866, BW=22.9MiB/s (24.0MB/s)(46.0MiB/2009msec) 00:30:15.177 slat (nsec): min=1906, max=163206, avg=2582.83, stdev=2436.19 00:30:15.177 clat (usec): min=4545, max=20724, avg=12063.15, stdev=981.53 00:30:15.177 lat (usec): min=4554, max=20726, avg=12065.73, stdev=981.39 00:30:15.177 clat percentiles (usec): 00:30:15.177 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[10945], 20.00th=[11338], 00:30:15.178 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:30:15.178 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13173], 95.00th=[13566], 00:30:15.178 | 99.00th=[14222], 99.50th=[14615], 99.90th=[18220], 99.95th=[19530], 00:30:15.178 | 99.99th=[20579] 00:30:15.178 bw ( KiB/s): min=22104, max=23944, per=99.88%, avg=23436.00, stdev=892.60, samples=4 00:30:15.178 iops : min= 5526, max= 5986, avg=5859.00, stdev=223.15, samples=4 00:30:15.178 write: IOPS=5857, BW=22.9MiB/s (24.0MB/s)(46.0MiB/2009msec); 0 zone resets 00:30:15.178 slat (usec): min=2, max=137, avg= 2.69, stdev= 1.86 00:30:15.178 clat (usec): min=2337, max=17818, avg=9574.74, stdev=878.70 00:30:15.178 lat (usec): min=2344, max=17821, avg=9577.43, stdev=878.62 00:30:15.178 clat percentiles (usec): 00:30:15.178 | 1.00th=[ 7570], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8979], 00:30:15.178 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:30:15.178 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10552], 95.00th=[10814], 00:30:15.178 | 99.00th=[11469], 99.50th=[11994], 99.90th=[15270], 99.95th=[16712], 00:30:15.178 | 99.99th=[17695] 00:30:15.178 bw ( KiB/s): min=23128, max=23616, per=99.94%, avg=23414.00, stdev=206.70, samples=4 00:30:15.178 iops : min= 5782, max= 5904, avg=5853.50, stdev=51.68, samples=4 00:30:15.178 lat (msec) : 4=0.05%, 10=36.04%, 20=63.90%, 50=0.01% 00:30:15.178 cpu : usr=56.23%, sys=38.15%, ctx=133, majf=0, minf=31 00:30:15.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:15.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:15.178 issued rwts: total=11785,11767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:15.178 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:15.178 00:30:15.178 Run status group 0 (all jobs): 00:30:15.178 READ: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=46.0MiB (48.3MB), run=2009-2009msec 00:30:15.178 WRITE: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=46.0MiB (48.2MB), run=2009-2009msec 00:30:15.178 04:46:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:15.178 04:46:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:15.178 04:46:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:19.369 04:46:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l 
lvs_n_0 00:30:19.369 04:46:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:22.703 04:46:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:22.703 04:46:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:24.603 rmmod nvme_tcp 00:30:24.603 rmmod nvme_fabrics 00:30:24.603 rmmod nvme_keyring 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2901080 ']' 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2901080 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 2901080 ']' 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 2901080 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2901080 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2901080' 00:30:24.603 killing process with pid 2901080 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 2901080 00:30:24.603 04:46:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 2901080 00:30:24.862 04:46:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:24.862 04:46:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:24.862 04:46:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:24.862 04:46:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:24.862 04:46:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:24.862 04:46:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.862 04:46:44 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:24.863 04:46:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.764 04:46:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:26.764 00:30:26.764 real 0m36.932s 00:30:26.764 user 2m21.414s 00:30:26.764 sys 0m6.911s 00:30:26.764 04:46:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:26.764 04:46:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.764 ************************************ 00:30:26.764 END TEST nvmf_fio_host 00:30:26.764 ************************************ 00:30:26.764 04:46:46 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:26.764 04:46:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:26.764 04:46:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:26.764 04:46:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:27.022 ************************************ 00:30:27.022 START TEST nvmf_failover 00:30:27.022 ************************************ 00:30:27.022 04:46:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:27.022 * Looking for test storage... 00:30:27.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:27.022 04:46:47 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.022 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:27.023 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:27.023 04:46:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:27.023 04:46:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.924 04:46:49 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:28.924 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:28.924 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:28.924 04:46:49 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:28.924 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:28.924 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:28.924 
04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:28.924 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:29.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:29.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:30:29.184 00:30:29.184 --- 10.0.0.2 ping statistics --- 00:30:29.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.184 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:29.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:29.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:30:29.184 00:30:29.184 --- 10.0.0.1 ping statistics --- 00:30:29.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.184 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2907139 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2907139 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 2907139 ']' 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:30:29.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:29.184 04:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:29.184 [2024-07-14 04:46:49.241266] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:30:29.184 [2024-07-14 04:46:49.241349] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:29.184 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.184 [2024-07-14 04:46:49.315347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:29.443 [2024-07-14 04:46:49.410395] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:29.443 [2024-07-14 04:46:49.410471] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:29.443 [2024-07-14 04:46:49.410497] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:29.443 [2024-07-14 04:46:49.410518] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:29.443 [2024-07-14 04:46:49.410537] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:29.443 [2024-07-14 04:46:49.410650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:29.443 [2024-07-14 04:46:49.410764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:29.443 [2024-07-14 04:46:49.410772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:29.443 04:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:29.443 04:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:29.443 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:29.443 04:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:29.443 04:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:29.443 04:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:29.443 04:46:49 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:29.701 [2024-07-14 04:46:49.764040] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:29.701 04:46:49 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:29.960 Malloc0 00:30:29.960 04:46:50 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:30.219 04:46:50 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:30.477 04:46:50 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:30.735 [2024-07-14 04:46:50.784503] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:30.735 04:46:50 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:30.993 [2024-07-14 04:46:51.033317] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:30.993 04:46:51 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:31.252 [2024-07-14 04:46:51.282120] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:31.252 04:46:51 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2907427 00:30:31.252 04:46:51 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:31.252 04:46:51 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:31.252 04:46:51 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2907427 /var/tmp/bdevperf.sock 00:30:31.252 04:46:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 2907427 ']' 00:30:31.252 04:46:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:31.252 04:46:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:31.252 04:46:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:31.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
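For reference, the failover scenario that host/failover.sh drives in this part of the log reduces to the RPC sequence sketched below. This is a minimal sketch assembled from the commands recorded in this run (rpc_py, the bdevperf RPC socket, the 10.0.0.2 listeners on ports 4420-4422, and the NVMe0 controller name are all taken from the log); it assumes the nvmf_tgt and bdevperf processes are already running, and it omits the waits, sleeps, and error handling the test script performs.

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock

  # Target side: TCP transport, a Malloc bdev, one subsystem listening on three ports.
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done

  # Host side: attach the controller through two paths via the bdevperf RPC socket,
  # then remove the active listener so I/O fails over to the remaining path.
  $rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420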
00:30:31.252 04:46:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:31.252 04:46:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:31.510 04:46:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:31.510 04:46:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:31.510 04:46:51 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:31.769 NVMe0n1 00:30:31.769 04:46:51 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:32.336 00:30:32.336 04:46:52 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2907568 00:30:32.336 04:46:52 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:32.336 04:46:52 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:33.274 04:46:53 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:33.531 [2024-07-14 04:46:53.671584] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4b090 is same with the state(5) to be set 00:30:33.531 [2024-07-14 04:46:53.671653] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4b090 is same with the state(5) to be set 00:30:33.531 [2024-07-14 04:46:53.671668] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4b090 is same with the state(5) to be set 00:30:33.531 [2024-07-14 04:46:53.671680] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4b090 is same with the state(5) to be set 00:30:33.531 [2024-07-14 04:46:53.671691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4b090 is same with the state(5) to be set 00:30:33.531 [2024-07-14 04:46:53.671703] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4b090 is same with the state(5) to be set 00:30:33.531 [2024-07-14 04:46:53.671723] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4b090 is same with the state(5) to be set 00:30:33.531 [2024-07-14 04:46:53.671735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4b090 is same with the state(5) to be set 00:30:33.531 [2024-07-14 04:46:53.671746] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4b090 is same with the state(5) to be set 00:30:33.531 [2024-07-14 04:46:53.671757] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4b090 is same with the state(5) to be set 00:30:33.531 [2024-07-14 04:46:53.671768] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4b090 is same with the state(5) to be set 00:30:33.531 [2024-07-14 04:46:53.671779] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4b090 is same with the state(5) to be set 00:30:33.531 [2024-07-14 04:46:53.671790] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4b090 is same with the state(5) to be set 00:30:33.532 [2024-07-14 04:46:53.671801] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4b090 is same with the state(5) to be set 00:30:33.532 04:46:53 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:36.816 04:46:56 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:37.075 00:30:37.075 04:46:57 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:37.343 [2024-07-14 04:46:57.448719] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.448774] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.448800] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.448822] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.448842] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.448877] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.448899] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.448919] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.448938] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.448958] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.448978] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.448997] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449037] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449058] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449079] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449107] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449128] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449175] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449225] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449243] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449262] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449295] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449312] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449330] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449350] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449370] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449391] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449410] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449430] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449451] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449471] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449492] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449512] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449532] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449553] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the 
state(5) to be set 00:30:37.343 [2024-07-14 04:46:57.449574] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c610 is same with the state(5) to be set 00:30:37.344 (this tcp.c:1598 message for tqpair=0x1c4c610 repeats identically for every entry through 04:46:57.450695) 00:30:37.344 04:46:57 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:40.636 04:47:00 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:40.636 [2024-07-14 04:47:00.690696] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.636 04:47:00 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:41.572 04:47:01 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:41.829 [2024-07-14 04:47:01.945976] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4c980 is same with the state(5) to be set 00:30:41.829 04:47:01 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2907568 00:30:48.416 0 00:30:48.416 04:47:07 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2907427 00:30:48.416 04:47:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 2907427 ']' 00:30:48.416 04:47:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 2907427 00:30:48.416 04:47:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:48.416 04:47:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:48.416 04:47:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2907427 00:30:48.416 04:47:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:48.416 04:47:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:48.416 04:47:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid
2907427' 00:30:48.416 killing process with pid 2907427 00:30:48.416 04:47:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 2907427 00:30:48.416 04:47:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 2907427 00:30:48.416 04:47:07 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:48.416 [2024-07-14 04:46:51.343318] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:30:48.417 [2024-07-14 04:46:51.343396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2907427 ] 00:30:48.417 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.417 [2024-07-14 04:46:51.404764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.417 [2024-07-14 04:46:51.491268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.417 Running I/O for 15 seconds... 00:30:48.417 [2024-07-14 04:46:53.672271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.417 [2024-07-14 04:46:53.672313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.672339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.672372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.672417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.672447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.672475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.672504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.672532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.672560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.672588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.672617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.672652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.672681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.672709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.672738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.672766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.672793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 
04:46:53.672823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.672852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.672890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.672919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.672946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.672974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.672988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.673003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.673020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.673035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.673049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.673063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.673076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.673091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.673104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.673119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.673133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.673147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.673160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.673174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.673187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.673202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.673215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.417 [2024-07-14 04:46:53.673229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.417 [2024-07-14 04:46:53.673242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:99 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81584 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.673975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.673990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:48.418 [2024-07-14 04:46:53.674003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.674018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.674032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.674046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.674060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.674075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.674089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.674104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.674122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.418 [2024-07-14 04:46:53.674137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.418 [2024-07-14 04:46:53.674151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674295] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.674980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.674996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.675010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.675025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.675040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.675055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.675069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.675084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.675098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.675113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.419 [2024-07-14 04:46:53.675128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.419 [2024-07-14 04:46:53.675144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.420 [2024-07-14 04:46:53.675158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.420 [2024-07-14 04:46:53.675203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.420 [2024-07-14 04:46:53.675237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.420 [2024-07-14 04:46:53.675266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.420 [2024-07-14 04:46:53.675299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.420 [2024-07-14 04:46:53.675327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.420 [2024-07-14 04:46:53.675356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.420 [2024-07-14 04:46:53.675384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.420 [2024-07-14 04:46:53.675414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.420 [2024-07-14 04:46:53.675442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.420 [2024-07-14 04:46:53.675470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.420 [2024-07-14 04:46:53.675499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.420 [2024-07-14 04:46:53.675528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:48.420 [2024-07-14 04:46:53.675543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.420 [2024-07-14 04:46:53.675557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.420 [2024-07-14 04:46:53.675586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.420 [2024-07-14 04:46:53.675614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.420 [2024-07-14 04:46:53.675649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.420 [2024-07-14 04:46:53.675679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.420 [2024-07-14 04:46:53.675708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.420 [2024-07-14 04:46:53.675737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.420 [2024-07-14 04:46:53.675766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.420 [2024-07-14 04:46:53.675795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.420 [2024-07-14 04:46:53.675823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675838] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.420 [2024-07-14 04:46:53.675852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.420 [2024-07-14 04:46:53.675906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.420 [2024-07-14 04:46:53.675937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.420 [2024-07-14 04:46:53.675967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.675982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.420 [2024-07-14 04:46:53.675996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.676012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.420 [2024-07-14 04:46:53.676026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.676041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.420 [2024-07-14 04:46:53.676059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.676075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.420 [2024-07-14 04:46:53.676089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.676105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.420 [2024-07-14 04:46:53.676119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.420 [2024-07-14 04:46:53.676149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.421 [2024-07-14 04:46:53.676164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.421 [2024-07-14 04:46:53.676191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82232 len:8 PRP1 0x0 PRP2 0x0 00:30:48.421 [2024-07-14 
04:46:53.676204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.421 [2024-07-14 04:46:53.676265] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2183f20 was disconnected and freed. reset controller. 00:30:48.421 [2024-07-14 04:46:53.676282] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:48.421 [2024-07-14 04:46:53.676315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.421 [2024-07-14 04:46:53.676349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.421 [2024-07-14 04:46:53.676364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.421 [2024-07-14 04:46:53.676382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.421 [2024-07-14 04:46:53.676405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.421 [2024-07-14 04:46:53.676428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.421 [2024-07-14 04:46:53.676451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.421 [2024-07-14 04:46:53.676473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.421 [2024-07-14 04:46:53.676497] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.421 [2024-07-14 04:46:53.676559] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2164740 (9): Bad file descriptor 00:30:48.421 [2024-07-14 04:46:53.679854] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.421 [2024-07-14 04:46:53.712100] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
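The entries above record one complete failover cycle inside try.txt: queued I/O on the 10.0.0.2:4420 path is completed manually with ABORTED - SQ DELETION status, bdev_nvme starts a failover of the trid from 10.0.0.2:4420 to 10.0.0.2:4421, the dead qpair (tqpair=0x2164740, Bad file descriptor) is disconnected, and the controller reset completes successfully. host/failover.sh drives such transitions from the target side by adding and removing TCP listeners through rpc.py, as in the @53 and @57 calls recorded earlier in this log. A minimal sketch of that listener toggle, assuming a running SPDK target that already has the TCP transport and the subsystem nqn.2016-06.io.spdk:cnode1 configured, with rpc.py shortened to its repository-relative path (addresses and ports are the ones used by this test):

  # re-announce the portal on 10.0.0.2:4420 for cnode1 (cf. host/failover.sh@53)
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # give the initiator a moment to (re)connect
  sleep 1
  # withdraw the portal on 10.0.0.2:4422 from the subsystem (cf. host/failover.sh@57)
  ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

The bdev_nvme initiator side (bdevperf in this run) is expected to react as logged above: abort in-flight commands on the lost path, fail over to another registered trid, and reset the controller.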
00:30:48.421 [2024-07-14 04:46:57.448737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:48.421 [2024-07-14 04:46:57.448781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.421 [2024-07-14 04:46:57.448801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:48.421 [2024-07-14 04:46:57.448815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.421 [2024-07-14 04:46:57.448838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:48.421 [2024-07-14 04:46:57.448862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.421 [2024-07-14 04:46:57.448885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:48.421 [2024-07-14 04:46:57.448899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.421 [2024-07-14 04:46:57.448912] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2164740 is same with the state(5) to be set
00:30:48.421 [2024-07-14 04:46:57.450954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.421 [2024-07-14 04:46:57.450981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated ABORTED - SQ DELETION (00/08) completions for the remaining queued I/O on qid:1 (READ lba:99936-100496, WRITE lba:100504-100936) elided ...]
00:30:48.424 [2024-07-14 04:46:57.454977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:48.424 [2024-07-14 04:46:57.454993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:48.424 [2024-07-14 04:46:57.455006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100944 len:8 PRP1 0x0 PRP2 0x0
00:30:48.424 [2024-07-14 04:46:57.455019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.424 [2024-07-14 04:46:57.455082] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2186160 was disconnected and freed. reset controller.
00:30:48.424 [2024-07-14 04:46:57.455100] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:30:48.424 [2024-07-14 04:46:57.455116] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.424 [2024-07-14 04:46:57.458396] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.424 [2024-07-14 04:46:57.458435] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2164740 (9): Bad file descriptor
00:30:48.424 [2024-07-14 04:46:57.617394] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:48.424 [2024-07-14 04:47:01.948218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:48.424 [2024-07-14 04:47:01.948261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated ABORTED - SQ DELETION (00/08) completions for the remaining queued WRITE commands on qid:1 (lba:67168-67664) elided ...]
00:30:48.426 [2024-07-14 04:47:01.950307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:48.426 [2024-07-14 04:47:01.950324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67672 len:8 PRP1 0x0 PRP2 0x0
00:30:48.426 [2024-07-14 04:47:01.950338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.426 [2024-07-14 04:47:01.950411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:48.426 [2024-07-14 04:47:01.950433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.426 [2024-07-14 04:47:01.950449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*:
ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.426 [2024-07-14 04:47:01.950463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.426 [2024-07-14 04:47:01.950477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.426 [2024-07-14 04:47:01.950494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.426 [2024-07-14 04:47:01.950509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.427 [2024-07-14 04:47:01.950523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.950536] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2164740 is same with the state(5) to be set 00:30:48.427 [2024-07-14 04:47:01.950690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.427 [2024-07-14 04:47:01.950710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.950723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67680 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.950736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.950753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.427 [2024-07-14 04:47:01.950765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.950777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67688 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.950790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.950804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.427 [2024-07-14 04:47:01.950816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.950828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67696 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.950841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.950878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.427 [2024-07-14 04:47:01.950892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.950903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67704 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.950931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.950945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:30:48.427 [2024-07-14 04:47:01.950957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.950969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67712 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.950982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.950996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.427 [2024-07-14 04:47:01.951007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.951019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67720 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.951032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.951045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.427 [2024-07-14 04:47:01.951057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.951073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67728 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.951086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.951100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.427 [2024-07-14 04:47:01.951111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.951123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67736 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.951136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.951153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.427 [2024-07-14 04:47:01.951164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.951176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67744 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.951189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.951202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.427 [2024-07-14 04:47:01.951214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.951226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67752 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.951239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.951253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.427 [2024-07-14 04:47:01.951279] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.951290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67760 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.951303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.951316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.427 [2024-07-14 04:47:01.951327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.951338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67768 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.951351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.951364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.427 [2024-07-14 04:47:01.951375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.951386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67776 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.951398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.951411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.427 [2024-07-14 04:47:01.951422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.951433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67784 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.951446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.951459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.427 [2024-07-14 04:47:01.951473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.951485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67792 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.951498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.951512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.427 [2024-07-14 04:47:01.951523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.951534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67800 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.951547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.951560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.427 [2024-07-14 04:47:01.951571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.951582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67808 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.951594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.951607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.427 [2024-07-14 04:47:01.951618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.951629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67816 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.951641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.951654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.427 [2024-07-14 04:47:01.951665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.951676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67824 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.951688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.951701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.427 [2024-07-14 04:47:01.951712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.951723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67832 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.951736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.951749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.427 [2024-07-14 04:47:01.951760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.951771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67840 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.951784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.427 [2024-07-14 04:47:01.951796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.427 [2024-07-14 04:47:01.951807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.427 [2024-07-14 04:47:01.951819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67848 len:8 PRP1 0x0 PRP2 0x0 00:30:48.427 [2024-07-14 04:47:01.951831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.951856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.951895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 
[2024-07-14 04:47:01.951909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67856 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.951923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.951937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.951955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.951968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67864 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.951980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.951994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.952006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.952017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67872 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.952031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.952044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.952056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.952067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67880 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.952081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.952094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.952106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.952117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67888 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.952130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.952154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.952166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.952192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67896 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.952205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.952219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.952230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.952241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67904 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.952253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.952266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.952278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.952289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67912 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.952305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.952319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.952330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.952342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67920 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.952354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.952372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.952388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.952400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67928 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.952412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.952425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.952436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.952447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67936 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.952460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.952473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.952484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.952496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67944 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.952508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.952521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.952532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.952543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:67952 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.952556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.952569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.952580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.952591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67960 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.952604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.952617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.952629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.952640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67968 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.952652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.952665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.952677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.952691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67976 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.952704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.952717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.952728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.952740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67984 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.952752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.952770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.952786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.952798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67992 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.952811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.952824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.952835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.952860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68000 len:8 PRP1 0x0 PRP2 0x0 
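Every completion in this burst reports the same status pair, "(00/08)", read as Status Code Type / Status Code: SCT 0x0 (Generic Command Status) with SC 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion and which is printed above as "ABORTED - SQ DELETION"; the trailing sqhd/p/m/dnr fields are the submission-queue head, phase, more, and do-not-retry bits of the completion entry. A minimal decoding sketch in Python follows; the GENERIC_STATUS table and decode_status helper are illustrative names for this note only, not part of SPDK or of the autotest scripts.

# Minimal sketch: map the "(sct/sc)" pair printed in the completion notices
# above to a readable label. Only a few Generic Command Status codes are
# listed; names follow the NVMe base spec and the string seen in this log.
GENERIC_STATUS = {
    0x00: "SUCCESSFUL COMPLETION",
    0x07: "COMMAND ABORT REQUESTED",
    0x08: "ABORTED - SQ DELETION",   # the status reported throughout this burst
}

def decode_status(sct: int, sc: int) -> str:
    # SCT 0x0 is the Generic Command Status type; other types are just echoed.
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"GENERIC STATUS 0x{sc:02x}")
    return f"SCT 0x{sct:x} / SC 0x{sc:02x}"

print(decode_status(0x00, 0x08))   # -> ABORTED - SQ DELETION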
00:30:48.428 [2024-07-14 04:47:01.952880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.952896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.952908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.952919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68008 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.952932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.952946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.952958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.952969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68016 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.952982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.952995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.953007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.953018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68024 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.953031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.953045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.953056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.953068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68032 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.953081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.953099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.953111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.953122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68040 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.953135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.428 [2024-07-14 04:47:01.953157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.428 [2024-07-14 04:47:01.953182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.428 [2024-07-14 04:47:01.953194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67032 len:8 PRP1 0x0 PRP2 0x0 00:30:48.428 [2024-07-14 04:47:01.953207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.953226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.953242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.953254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67040 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.953266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.953279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.953291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.953302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67048 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.953314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.953327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.953339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.953350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67056 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.953363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.953376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.953387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.953398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67064 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.953411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.953424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.953435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.953446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67072 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.953459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.953472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.953483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.953494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67080 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.953506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.953523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.953534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.953545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67088 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.953558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.953571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.953582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.953593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67096 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.953605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.953623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.953639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.953651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67104 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.953663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.953677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.953688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.953699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67112 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.953712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.953725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.953736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.953748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67120 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.953760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.953773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.953784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.953796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67128 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.953808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
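From here the log repeats the same pattern (nvme_qpair_abort_queued_reqs followed by a manual completion carrying the ABORTED - SQ DELETION status) for every queued 8-block WRITE and READ. When reviewing a run like this it can help to condense the burst into a command count and an LBA range; a short post-processing sketch follows, assuming the console output has been saved to a file. The build.log path and the summarize helper are illustrative for this note, not part of the test scripts.

import re
from collections import Counter

# Count the READ/WRITE commands printed by nvme_io_qpair_print_command above
# (all of which were aborted in this burst) and report the LBA range covered.
CMD = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:\d+ nsid:\d+ lba:(\d+) len:(\d+)")

def summarize(path: str) -> None:
    counts, lbas = Counter(), []
    with open(path) as log:
        for line in log:
            for op, _sqid, lba, _length in CMD.findall(line):
                counts[op] += 1
                lbas.append(int(lba))
    if lbas:
        print(f"{sum(counts.values())} aborted commands "
              f"({counts['WRITE']} WRITE, {counts['READ']} READ), "
              f"lba {min(lbas)}..{max(lbas)}")

summarize("build.log")   # point this at the saved console log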
00:30:48.429 [2024-07-14 04:47:01.953822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.953833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.953844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67136 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.953860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.953897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.953910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.953922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67144 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.953938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.953953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.953965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.953976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68048 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.953989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.954002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.954014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.954025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67152 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.954038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.954057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.954070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.954081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67160 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.954094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.954108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.954119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.954131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67168 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.954144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.954168] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.954195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.954206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67176 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.954218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.954232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.954243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.954254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67184 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.954266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.954279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.954290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.963100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67192 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.963131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.963148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.963166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.963178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67200 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.963192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.963205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.963232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.963243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67208 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.963255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.963268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.429 [2024-07-14 04:47:01.963280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.963290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67216 len:8 PRP1 0x0 PRP2 0x0 00:30:48.429 [2024-07-14 04:47:01.963303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.429 [2024-07-14 04:47:01.963316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:30:48.429 [2024-07-14 04:47:01.963327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.429 [2024-07-14 04:47:01.963338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67224 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.963350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.963363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.963373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.963384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67232 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.963396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.963410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.963421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.963431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67240 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.963443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.963456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.963467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.963478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67248 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.963490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.963502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.963513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.963524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67256 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.963536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.963553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.963564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.963575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67264 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.963587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.963600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.963611] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.963622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67272 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.963634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.963647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.963658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.963669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67280 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.963681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.963694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.963705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.963716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67288 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.963729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.963741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.963752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.963763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67296 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.963775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.963788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.963799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.963810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67304 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.963822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.963834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.963845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.963856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67312 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.963959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.963980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.963993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.964004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67320 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.964021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.964035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.964046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.964058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67328 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.964071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.964084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.964095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.964106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67336 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.964119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.964132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.964154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.964165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67344 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.964177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.964206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.964217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.964228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67352 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.964240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.964253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.964263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.964274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67360 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.964287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.964299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.964310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 
[2024-07-14 04:47:01.964321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67368 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.964333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.964346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.964357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.964368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67376 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.964380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.964392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.964403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.964417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67384 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.964430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.964443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.964454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.964465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67392 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.964477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.964489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.964500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.964511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67400 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.964523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.964536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.964546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.964557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67408 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.964569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.964582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.964593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.430 [2024-07-14 04:47:01.964603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67416 len:8 PRP1 0x0 PRP2 0x0 00:30:48.430 [2024-07-14 04:47:01.964615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.430 [2024-07-14 04:47:01.964628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.430 [2024-07-14 04:47:01.964638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.431 [2024-07-14 04:47:01.964649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67424 len:8 PRP1 0x0 PRP2 0x0 00:30:48.431 [2024-07-14 04:47:01.964661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.431 [2024-07-14 04:47:01.964674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.431 [2024-07-14 04:47:01.964684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.431 [2024-07-14 04:47:01.964695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67432 len:8 PRP1 0x0 PRP2 0x0 00:30:48.431 [2024-07-14 04:47:01.964707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.431 [2024-07-14 04:47:01.964720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.431 [2024-07-14 04:47:01.964731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.431 [2024-07-14 04:47:01.964742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67440 len:8 PRP1 0x0 PRP2 0x0 00:30:48.431 [2024-07-14 04:47:01.964754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.431 [2024-07-14 04:47:01.964766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.431 [2024-07-14 04:47:01.964780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.431 [2024-07-14 04:47:01.964792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67448 len:8 PRP1 0x0 PRP2 0x0 00:30:48.431 [2024-07-14 04:47:01.964804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.431 [2024-07-14 04:47:01.964816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.431 [2024-07-14 04:47:01.964827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.431 [2024-07-14 04:47:01.964838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67456 len:8 PRP1 0x0 PRP2 0x0 00:30:48.431 [2024-07-14 04:47:01.964871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.431 [2024-07-14 04:47:01.964887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.431 [2024-07-14 04:47:01.964898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.431 [2024-07-14 04:47:01.964909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:67464 len:8 PRP1 0x0 PRP2 0x0 00:30:48.431 [2024-07-14 04:47:01.964922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.431 [2024-07-14 04:47:01.964935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.431 [2024-07-14 04:47:01.964947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.431 [2024-07-14 04:47:01.964958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67472 len:8 PRP1 0x0 PRP2 0x0 00:30:48.431 [2024-07-14 04:47:01.964970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.431 [2024-07-14 04:47:01.964984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.431 [2024-07-14 04:47:01.964995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.431 [2024-07-14 04:47:01.965007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67480 len:8 PRP1 0x0 PRP2 0x0 00:30:48.431 [2024-07-14 04:47:01.965019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.431 [2024-07-14 04:47:01.965032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.431 [2024-07-14 04:47:01.965043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.431 [2024-07-14 04:47:01.965055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67488 len:8 PRP1 0x0 PRP2 0x0 00:30:48.431 [2024-07-14 04:47:01.965067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.431 [2024-07-14 04:47:01.965080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.431 [2024-07-14 04:47:01.965091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.431 [2024-07-14 04:47:01.965102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67496 len:8 PRP1 0x0 PRP2 0x0 00:30:48.431 [2024-07-14 04:47:01.965115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.431 [2024-07-14 04:47:01.965128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.431 [2024-07-14 04:47:01.965139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.431 [2024-07-14 04:47:01.965150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67504 len:8 PRP1 0x0 PRP2 0x0 00:30:48.431 [2024-07-14 04:47:01.965162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.431 [2024-07-14 04:47:01.965179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.431 [2024-07-14 04:47:01.965191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.431 [2024-07-14 04:47:01.965203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67512 len:8 PRP1 0x0 PRP2 0x0 
00:30:48.431 [2024-07-14 04:47:01.965215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.431 [2024-07-14 04:47:01.965233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.431 [2024-07-14 04:47:01.965246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.431 [2024-07-14 04:47:01.965257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67520 len:8 PRP1 0x0 PRP2 0x0 00:30:48.431 [2024-07-14 04:47:01.965270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.431 [2024-07-14 04:47:01.965283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.431 [2024-07-14 04:47:01.965294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.431 [2024-07-14 04:47:01.965306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67528 len:8 PRP1 0x0 PRP2 0x0 00:30:48.431 [2024-07-14 04:47:01.965332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.431 [2024-07-14 04:47:01.965346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.431 [2024-07-14 04:47:01.965356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.431 [2024-07-14 04:47:01.965367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67536 len:8 PRP1 0x0 PRP2 0x0 00:30:48.431 [2024-07-14 04:47:01.965379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.431 [2024-07-14 04:47:01.965392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.431 [2024-07-14 04:47:01.965403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.431 [2024-07-14 04:47:01.965414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67544 len:8 PRP1 0x0 PRP2 0x0 00:30:48.431 [2024-07-14 04:47:01.965425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.431 [2024-07-14 04:47:01.965438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.431 [2024-07-14 04:47:01.965449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.431 [2024-07-14 04:47:01.965460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67552 len:8 PRP1 0x0 PRP2 0x0 00:30:48.431 [2024-07-14 04:47:01.965472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.431 [2024-07-14 04:47:01.965484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.431 [2024-07-14 04:47:01.965495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.431 [2024-07-14 04:47:01.965506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67560 len:8 PRP1 0x0 PRP2 0x0 00:30:48.431 [2024-07-14 04:47:01.965518] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.431 [2024-07-14 04:47:01.965531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.431 [2024-07-14 04:47:01.965541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.431 [2024-07-14 04:47:01.965552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67568 len:8 PRP1 0x0 PRP2 0x0 00:30:48.431 [2024-07-14 04:47:01.965568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.431 [2024-07-14 04:47:01.965581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.431 [2024-07-14 04:47:01.965592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.431 [2024-07-14 04:47:01.965603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67576 len:8 PRP1 0x0 PRP2 0x0 00:30:48.431 [2024-07-14 04:47:01.965615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.431 [2024-07-14 04:47:01.965632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.431 [2024-07-14 04:47:01.965644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.431 [2024-07-14 04:47:01.965655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67584 len:8 PRP1 0x0 PRP2 0x0 00:30:48.431 [2024-07-14 04:47:01.965667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.431 [2024-07-14 04:47:01.965680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.432 [2024-07-14 04:47:01.965691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.432 [2024-07-14 04:47:01.965702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67592 len:8 PRP1 0x0 PRP2 0x0 00:30:48.432 [2024-07-14 04:47:01.965714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.432 [2024-07-14 04:47:01.965730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.432 [2024-07-14 04:47:01.965741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.432 [2024-07-14 04:47:01.965753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67600 len:8 PRP1 0x0 PRP2 0x0 00:30:48.432 [2024-07-14 04:47:01.965765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.432 [2024-07-14 04:47:01.965778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.432 [2024-07-14 04:47:01.965793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.432 [2024-07-14 04:47:01.965804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67608 len:8 PRP1 0x0 PRP2 0x0 00:30:48.432 [2024-07-14 04:47:01.965816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.432 [2024-07-14 04:47:01.965829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.432 [2024-07-14 04:47:01.965840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.432 [2024-07-14 04:47:01.965860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67616 len:8 PRP1 0x0 PRP2 0x0 00:30:48.432 [2024-07-14 04:47:01.965898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.432 [2024-07-14 04:47:01.965913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.432 [2024-07-14 04:47:01.965925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.432 [2024-07-14 04:47:01.965936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67624 len:8 PRP1 0x0 PRP2 0x0 00:30:48.432 [2024-07-14 04:47:01.965948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.432 [2024-07-14 04:47:01.965961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.432 [2024-07-14 04:47:01.965976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.432 [2024-07-14 04:47:01.965988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67632 len:8 PRP1 0x0 PRP2 0x0 00:30:48.432 [2024-07-14 04:47:01.966001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.432 [2024-07-14 04:47:01.966014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.432 [2024-07-14 04:47:01.966025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.432 [2024-07-14 04:47:01.966036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67640 len:8 PRP1 0x0 PRP2 0x0 00:30:48.432 [2024-07-14 04:47:01.966049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.432 [2024-07-14 04:47:01.966067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.432 [2024-07-14 04:47:01.966079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.432 [2024-07-14 04:47:01.966090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67648 len:8 PRP1 0x0 PRP2 0x0 00:30:48.432 [2024-07-14 04:47:01.966103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.432 [2024-07-14 04:47:01.966116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.432 [2024-07-14 04:47:01.966127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.432 [2024-07-14 04:47:01.966139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67656 len:8 PRP1 0x0 PRP2 0x0 00:30:48.432 [2024-07-14 04:47:01.966160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:48.432 [2024-07-14 04:47:01.966173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.432 [2024-07-14 04:47:01.966199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.432 [2024-07-14 04:47:01.966211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67664 len:8 PRP1 0x0 PRP2 0x0 00:30:48.432 [2024-07-14 04:47:01.966224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.432 [2024-07-14 04:47:01.966238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.432 [2024-07-14 04:47:01.966248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.432 [2024-07-14 04:47:01.966259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67672 len:8 PRP1 0x0 PRP2 0x0 00:30:48.432 [2024-07-14 04:47:01.966270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.432 [2024-07-14 04:47:01.966330] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2188020 was disconnected and freed. reset controller. 00:30:48.432 [2024-07-14 04:47:01.966346] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:48.432 [2024-07-14 04:47:01.966361] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.432 [2024-07-14 04:47:01.966411] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2164740 (9): Bad file descriptor 00:30:48.432 [2024-07-14 04:47:01.969666] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.432 [2024-07-14 04:47:02.044148] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:48.432 
00:30:48.432 Latency(us)
00:30:48.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:48.432 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:48.432 Verification LBA range: start 0x0 length 0x4000
00:30:48.432 NVMe0n1 : 15.01 8850.68 34.57 678.76 0.00 13403.67 849.54 24272.59
00:30:48.432 ===================================================================================================================
00:30:48.432 Total : 8850.68 34.57 678.76 0.00 13403.67 849.54 24272.59
00:30:48.432 Received shutdown signal, test time was about 15.000000 seconds
00:30:48.432 
00:30:48.432 Latency(us)
00:30:48.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:48.432 ===================================================================================================================
00:30:48.432 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:48.432 04:47:07 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:30:48.432 04:47:07 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:30:48.432 04:47:07 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:30:48.432 04:47:07 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2909292
00:30:48.432 04:47:07 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:30:48.432 04:47:07 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2909292 /var/tmp/bdevperf.sock
00:30:48.432 04:47:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 2909292 ']'
00:30:48.432 04:47:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:48.432 04:47:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:30:48.432 04:47:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:48.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
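The bdevperf invocation above (-z, no job file) starts the tool as an idle RPC server on /var/tmp/bdevperf.sock, and waitforlisten just polls that socket before any bdev RPCs are issued. A minimal stand-alone sketch of the same launch-and-wait pattern, assuming a local SPDK checkout at SPDK_DIR (placeholder, not the CI path) and using the generic rpc_get_methods call as the readiness probe:

SPDK_DIR=/path/to/spdk                     # assumption: adjust to your SPDK build
SOCK=/var/tmp/bdevperf.sock
# Start bdevperf with no job config (-z) so it only serves RPCs until told to run.
"$SPDK_DIR/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
# Poll the RPC socket (bounded retries) until bdevperf answers; only then is it
# safe to attach controllers and kick off the workload.
for _ in $(seq 1 100); do
    "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done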
00:30:48.432 04:47:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:48.432 04:47:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:48.432 04:47:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:48.432 04:47:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:48.432 04:47:08 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:48.432 [2024-07-14 04:47:08.346218] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:48.432 04:47:08 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:48.432 [2024-07-14 04:47:08.586887] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:48.691 04:47:08 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:48.949 NVMe0n1 00:30:48.949 04:47:09 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:49.518 00:30:49.518 04:47:09 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:49.777 00:30:49.777 04:47:09 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:49.777 04:47:09 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:50.035 04:47:10 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:50.293 04:47:10 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:53.584 04:47:13 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:53.584 04:47:13 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:53.584 04:47:13 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2909975 00:30:53.584 04:47:13 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:53.584 04:47:13 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2909975 00:30:54.959 0 00:30:54.959 04:47:14 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:54.959 [2024-07-14 04:47:07.874818] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:30:54.959 [2024-07-14 04:47:07.874924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2909292 ] 00:30:54.959 EAL: No free 2048 kB hugepages reported on node 1 00:30:54.960 [2024-07-14 04:47:07.934570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.960 [2024-07-14 04:47:08.017519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.960 [2024-07-14 04:47:10.377263] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:54.960 [2024-07-14 04:47:10.377382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:54.960 [2024-07-14 04:47:10.377406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.960 [2024-07-14 04:47:10.377425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:54.960 [2024-07-14 04:47:10.377438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.960 [2024-07-14 04:47:10.377451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:54.960 [2024-07-14 04:47:10.377465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.960 [2024-07-14 04:47:10.377479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:54.960 [2024-07-14 04:47:10.377507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.960 [2024-07-14 04:47:10.377522] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:54.960 [2024-07-14 04:47:10.377572] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.960 [2024-07-14 04:47:10.377607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a34740 (9): Bad file descriptor 00:30:54.960 [2024-07-14 04:47:10.472099] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:54.960 Running I/O for 1 seconds... 
00:30:54.960 00:30:54.960 Latency(us) 00:30:54.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:54.960 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:54.960 Verification LBA range: start 0x0 length 0x4000 00:30:54.960 NVMe0n1 : 1.01 8822.47 34.46 0.00 0.00 14448.57 2815.62 15146.10 00:30:54.960 =================================================================================================================== 00:30:54.960 Total : 8822.47 34.46 0.00 0.00 14448.57 2815.62 15146.10 00:30:54.960 04:47:14 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:54.960 04:47:14 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:54.960 04:47:15 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:55.217 04:47:15 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:55.217 04:47:15 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:55.475 04:47:15 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:55.733 04:47:15 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:59.045 04:47:18 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:59.045 04:47:18 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:59.045 04:47:19 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2909292 00:30:59.045 04:47:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 2909292 ']' 00:30:59.045 04:47:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 2909292 00:30:59.045 04:47:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:59.045 04:47:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:59.045 04:47:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2909292 00:30:59.045 04:47:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:59.045 04:47:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:59.045 04:47:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2909292' 00:30:59.045 killing process with pid 2909292 00:30:59.045 04:47:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 2909292 00:30:59.045 04:47:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 2909292 00:30:59.304 04:47:19 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:59.304 04:47:19 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:59.563 04:47:19 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:59.563 
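Putting the RPC sequence from the trace above in one place: the target exposes the subsystem on two extra portals, the bdevperf side attaches the same controller through all three, the primary path is detached to force a failover, the verify job is driven over the RPC socket with bdevperf.py perform_tests, and the captured bdevperf log (try.txt in this test) must show the reset path succeeding. A hedged sketch of that flow, reusing the SPDK_DIR/SOCK placeholders from the earlier sketch; ports, address and NQN are the ones used above:

RPC="$SPDK_DIR/scripts/rpc.py"
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Target side: add listeners on the two secondary portals (4420 already exists).
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

# bdevperf side: attach the controller through every portal, then drop 4420 so
# bdev_nvme has to fail over to the remaining trids.
for port in 4420 4421 4422; do
    $RPC -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN"
done
$RPC -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"

# Run the registered verify job, then check that the controller is still present
# and that the captured log (assumption: bdevperf output saved to try.txt as in
# this test) records a successful reset.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
$RPC -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0
grep -q 'Resetting controller successful' try.txt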
04:47:19 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:59.563 04:47:19 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:59.563 04:47:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:59.563 04:47:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:59.563 04:47:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:59.563 04:47:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:59.563 04:47:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:59.563 04:47:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:59.563 rmmod nvme_tcp 00:30:59.563 rmmod nvme_fabrics 00:30:59.822 rmmod nvme_keyring 00:30:59.822 04:47:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:59.822 04:47:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:59.822 04:47:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:59.822 04:47:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2907139 ']' 00:30:59.822 04:47:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2907139 00:30:59.822 04:47:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 2907139 ']' 00:30:59.822 04:47:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 2907139 00:30:59.822 04:47:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:59.822 04:47:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:59.822 04:47:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2907139 00:30:59.822 04:47:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:59.822 04:47:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:59.822 04:47:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2907139' 00:30:59.822 killing process with pid 2907139 00:30:59.822 04:47:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 2907139 00:30:59.822 04:47:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 2907139 00:31:00.080 04:47:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:00.080 04:47:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:00.080 04:47:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:00.080 04:47:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:00.080 04:47:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:00.080 04:47:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.080 04:47:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:00.080 04:47:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.018 04:47:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:02.018 00:31:02.018 real 0m35.165s 00:31:02.018 user 2m4.058s 00:31:02.018 sys 0m5.891s 00:31:02.019 04:47:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:02.019 04:47:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
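Teardown in the trace follows the usual nvmftestfini order: the test subsystem and scratch log are removed, the kernel initiator modules are unloaded (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above come from modprobe -v -r), and the nvmf target process is stopped. Roughly, with nvmfpid standing in for whatever PID the target was started with:

$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # as issued a few lines above
rm -f "$SPDK_DIR/test/nvmf/host/try.txt"                # per-test scratch output
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null || true  # placeholder PID variable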
00:31:02.019 ************************************ 00:31:02.019 END TEST nvmf_failover 00:31:02.019 ************************************ 00:31:02.019 04:47:22 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:02.019 04:47:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:02.019 04:47:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:02.019 04:47:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:02.019 ************************************ 00:31:02.019 START TEST nvmf_host_discovery 00:31:02.019 ************************************ 00:31:02.019 04:47:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:02.275 * Looking for test storage... 00:31:02.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:02.275 04:47:22 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.275 04:47:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:02.276 04:47:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:04.177 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:04.177 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:04.177 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:04.177 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:04.177 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:04.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:04.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:31:04.178 00:31:04.178 --- 10.0.0.2 ping statistics --- 00:31:04.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.178 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:04.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:04.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:31:04.178 00:31:04.178 --- 10.0.0.1 ping statistics --- 00:31:04.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.178 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:04.178 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:04.437 04:47:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:04.437 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:04.437 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:04.437 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.437 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2912678 00:31:04.437 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:04.437 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2912678 00:31:04.437 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 2912678 ']' 00:31:04.437 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.437 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:04.437 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.437 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:04.437 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.437 [2024-07-14 04:47:24.439455] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:04.437 [2024-07-14 04:47:24.439540] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.437 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.437 [2024-07-14 04:47:24.503101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.437 [2024-07-14 04:47:24.586165] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:04.437 [2024-07-14 04:47:24.586217] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:04.437 [2024-07-14 04:47:24.586246] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:04.437 [2024-07-14 04:47:24.586256] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:04.437 [2024-07-14 04:47:24.586266] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
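Before the discovery test can use 10.0.0.1/10.0.0.2, nvmftestinit above splits the two detected e810 ports between network namespaces and then starts nvmf_tgt inside the target namespace. Condensed, the same sequence looks like the sketch below (interface names and addresses are the ones detected above; SPDK_DIR is still a placeholder):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                      # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1               # target -> initiator
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &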
00:31:04.437 [2024-07-14 04:47:24.586292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.696 [2024-07-14 04:47:24.728335] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.696 [2024-07-14 04:47:24.736496] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.696 null0 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.696 null1 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2912705 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2912705 /tmp/host.sock 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 2912705 ']' 00:31:04.696 04:47:24 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:04.696 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:04.696 04:47:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.696 [2024-07-14 04:47:24.811924] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:04.696 [2024-07-14 04:47:24.812005] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2912705 ] 00:31:04.696 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.696 [2024-07-14 04:47:24.872468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.954 [2024-07-14 04:47:24.958442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:04.954 
04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:04.954 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:05.211 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.211 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:05.211 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:05.211 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.211 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.211 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.211 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:05.211 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:05.211 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:05.211 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:05.212 04:47:25 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.212 [2024-07-14 04:47:25.370213] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:05.212 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:31:05.471 04:47:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:06.037 [2024-07-14 04:47:26.136862] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:06.037 [2024-07-14 04:47:26.136901] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:06.037 [2024-07-14 04:47:26.136941] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:06.295 [2024-07-14 04:47:26.265382] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:06.295 [2024-07-14 04:47:26.327097] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:31:06.295 [2024-07-14 04:47:26.327119] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:06.552 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.553 04:47:26 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:06.553 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:06.811 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.811 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:06.811 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.811 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:06.811 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.812 [2024-07-14 04:47:26.987039] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:06.812 [2024-07-14 04:47:26.987736] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:06.812 [2024-07-14 04:47:26.987773] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:06.812 04:47:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:06.812 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.071 04:47:27 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:07.071 04:47:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 
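(Annotation: with both applications up, the test drives everything through SPDK JSON-RPC. The rpc_cmd lines above and below map roughly onto the standalone rpc.py client as follows, assuming the default /var/tmp/spdk.sock for the target and /tmp/host.sock for the host app, as in this run:
    # target side: transport, discovery listener, backing null bdevs, data subsystem
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512
    scripts/rpc.py bdev_null_create null1 1000 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    # host side: start discovery against port 8009, then poll until the controller and bdevs appear
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'              # expect nvme0n1, then nvme0n2 once null1 is attached
Adding the 4421 listener and the second namespace, as done just above, should then leave the discovered controller with two paths (4420 4421) and two bdevs. The rpc.py form is illustrative; the run itself issues the same RPCs through the rpc_cmd wrapper shown in the trace.)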
00:31:07.071 [2024-07-14 04:47:27.116618] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:07.332 [2024-07-14 04:47:27.381954] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:07.332 [2024-07-14 04:47:27.381977] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:07.332 [2024-07-14 04:47:27.381986] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.270 [2024-07-14 04:47:28.199572] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:08.270 [2024-07-14 04:47:28.199605] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:08.270 [2024-07-14 04:47:28.206752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:08.270 [2024-07-14 04:47:28.206785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.270 [2024-07-14 04:47:28.206802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:08.270 [2024-07-14 04:47:28.206816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.270 [2024-07-14 04:47:28.206829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:31:08.270 [2024-07-14 04:47:28.206842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.270 [2024-07-14 04:47:28.206864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:08.270 [2024-07-14 04:47:28.206889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.270 [2024-07-14 04:47:28.206903] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f1da0 is same with the state(5) to be set 00:31:08.270 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.270 [2024-07-14 04:47:28.216744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f1da0 (9): Bad file descriptor 00:31:08.271 [2024-07-14 04:47:28.226785] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:08.271 [2024-07-14 04:47:28.227079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.271 [2024-07-14 04:47:28.227108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f1da0 with addr=10.0.0.2, port=4420 00:31:08.271 [2024-07-14 04:47:28.227125] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f1da0 is same with the state(5) to be set 00:31:08.271 [2024-07-14 04:47:28.227147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f1da0 (9): Bad file descriptor 00:31:08.271 [2024-07-14 04:47:28.227178] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:08.271 [2024-07-14 04:47:28.227192] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:08.271 [2024-07-14 04:47:28.227208] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:08.271 [2024-07-14 04:47:28.227232] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.271 [2024-07-14 04:47:28.236880] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:08.271 [2024-07-14 04:47:28.237112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.271 [2024-07-14 04:47:28.237139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f1da0 with addr=10.0.0.2, port=4420 00:31:08.271 [2024-07-14 04:47:28.237166] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f1da0 is same with the state(5) to be set 00:31:08.271 [2024-07-14 04:47:28.237188] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f1da0 (9): Bad file descriptor 00:31:08.271 [2024-07-14 04:47:28.237209] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:08.271 [2024-07-14 04:47:28.237223] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:08.271 [2024-07-14 04:47:28.237236] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:31:08.271 [2024-07-14 04:47:28.237255] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:08.271 [2024-07-14 04:47:28.246965] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:08.271 [2024-07-14 04:47:28.247170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.271 [2024-07-14 04:47:28.247201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f1da0 with addr=10.0.0.2, port=4420 00:31:08.271 [2024-07-14 04:47:28.247219] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f1da0 is same with the state(5) to be set 00:31:08.271 [2024-07-14 04:47:28.247244] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f1da0 (9): Bad file descriptor 00:31:08.271 [2024-07-14 04:47:28.247266] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:08.271 [2024-07-14 04:47:28.247281] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:08.271 [2024-07-14 04:47:28.247296] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:08.271 [2024-07-14 04:47:28.247317] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:08.271 [2024-07-14 04:47:28.257042] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:08.271 [2024-07-14 04:47:28.257274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.271 [2024-07-14 04:47:28.257306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f1da0 with addr=10.0.0.2, port=4420 00:31:08.271 [2024-07-14 04:47:28.257323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f1da0 is same with the state(5) to be set 00:31:08.271 [2024-07-14 04:47:28.257348] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f1da0 (9): Bad file descriptor 00:31:08.271 [2024-07-14 04:47:28.257384] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:08.271 [2024-07-14 04:47:28.257404] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:08.271 [2024-07-14 04:47:28.257419] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:08.271 [2024-07-14 04:47:28.257439] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.271 [2024-07-14 04:47:28.267120] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:08.271 [2024-07-14 04:47:28.267362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.271 [2024-07-14 04:47:28.267390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f1da0 with addr=10.0.0.2, port=4420 00:31:08.271 [2024-07-14 04:47:28.267405] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f1da0 is same with the state(5) to be set 00:31:08.271 [2024-07-14 04:47:28.267426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f1da0 (9): Bad file descriptor 00:31:08.271 [2024-07-14 04:47:28.267473] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:08.271 [2024-07-14 04:47:28.267491] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:08.271 [2024-07-14 04:47:28.267505] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:08.271 [2024-07-14 04:47:28.267524] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
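(Annotation: the connect() errno-111 and "Resetting controller failed" messages above and below are the expected effect of the step being tested here: the 4420 listener was just removed while the host still held an active path to it. In rpc.py terms, roughly:
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # the host keeps retrying 4420 until the next discovery log page drops that path; only 4421 should remain
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'   # expect 4421
Again illustrative only; the trace below shows the same check done via rpc_cmd and the 4420 path being reported as "not found".)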
00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.271 [2024-07-14 04:47:28.277210] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:08.271 [2024-07-14 04:47:28.277445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.271 [2024-07-14 04:47:28.277472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f1da0 with addr=10.0.0.2, port=4420 00:31:08.271 [2024-07-14 04:47:28.277489] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f1da0 is same with the state(5) to be set 00:31:08.271 [2024-07-14 04:47:28.277510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f1da0 (9): Bad file descriptor 00:31:08.271 [2024-07-14 04:47:28.277542] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:08.271 [2024-07-14 04:47:28.277560] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:08.271 [2024-07-14 04:47:28.277573] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:08.271 [2024-07-14 04:47:28.277592] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:08.271 [2024-07-14 04:47:28.286564] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:08.271 [2024-07-14 04:47:28.286597] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:08.271 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:08.272 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:08.530 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:08.530 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq 
'. | length' 00:31:08.530 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.530 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.530 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.530 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:08.530 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:08.530 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:08.530 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.530 04:47:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:08.530 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.530 04:47:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.466 [2024-07-14 04:47:29.556072] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:09.466 [2024-07-14 04:47:29.556098] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:09.466 [2024-07-14 04:47:29.556119] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:09.466 [2024-07-14 04:47:29.642398] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:09.726 [2024-07-14 04:47:29.913469] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:09.726 [2024-07-14 04:47:29.913515] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:09.726 04:47:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.726 04:47:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:09.726 04:47:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:09.726 04:47:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:09.726 04:47:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:09.726 04:47:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:09.726 04:47:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:09.726 04:47:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:09.726 04:47:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:09.726 04:47:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:09.988 request: 00:31:09.988 { 00:31:09.988 "name": "nvme", 00:31:09.988 "trtype": "tcp", 00:31:09.988 "traddr": "10.0.0.2", 00:31:09.988 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:09.988 "adrfam": "ipv4", 00:31:09.988 "trsvcid": "8009", 00:31:09.988 "wait_for_attach": true, 00:31:09.988 "method": "bdev_nvme_start_discovery", 00:31:09.988 "req_id": 1 00:31:09.988 } 00:31:09.988 Got JSON-RPC error response 00:31:09.988 response: 00:31:09.988 { 00:31:09.988 "code": -17, 00:31:09.988 "message": "File exists" 00:31:09.988 } 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:09.988 04:47:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.988 request: 00:31:09.988 { 00:31:09.988 "name": "nvme_second", 00:31:09.988 "trtype": "tcp", 00:31:09.988 "traddr": "10.0.0.2", 00:31:09.988 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:09.988 "adrfam": "ipv4", 00:31:09.988 "trsvcid": "8009", 00:31:09.988 "wait_for_attach": true, 00:31:09.988 "method": "bdev_nvme_start_discovery", 00:31:09.988 "req_id": 1 00:31:09.988 } 00:31:09.988 Got JSON-RPC error response 00:31:09.988 response: 00:31:09.988 { 00:31:09.988 "code": -17, 00:31:09.988 "message": "File exists" 00:31:09.988 } 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.988 04:47:30 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.988 04:47:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.365 [2024-07-14 04:47:31.129149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.366 [2024-07-14 04:47:31.129242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ff970 with addr=10.0.0.2, port=8010 00:31:11.366 [2024-07-14 04:47:31.129278] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:11.366 [2024-07-14 04:47:31.129295] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:11.366 [2024-07-14 04:47:31.129310] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:12.312 [2024-07-14 04:47:32.131488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.312 [2024-07-14 04:47:32.131522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ff970 with addr=10.0.0.2, port=8010 00:31:12.312 [2024-07-14 04:47:32.131543] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:12.312 [2024-07-14 04:47:32.131555] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:12.312 [2024-07-14 04:47:32.131566] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:13.251 [2024-07-14 04:47:33.133663] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:13.251 request: 00:31:13.251 { 00:31:13.251 "name": "nvme_second", 00:31:13.251 "trtype": "tcp", 00:31:13.251 "traddr": "10.0.0.2", 00:31:13.251 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:13.251 "adrfam": "ipv4", 00:31:13.251 "trsvcid": "8010", 00:31:13.251 "attach_timeout_ms": 3000, 00:31:13.251 "method": "bdev_nvme_start_discovery", 00:31:13.251 "req_id": 1 00:31:13.251 } 00:31:13.251 Got JSON-RPC error response 00:31:13.251 response: 00:31:13.251 { 00:31:13.251 "code": -110, 00:31:13.251 "message": "Connection timed out" 
00:31:13.251 } 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2912705 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:13.251 rmmod nvme_tcp 00:31:13.251 rmmod nvme_fabrics 00:31:13.251 rmmod nvme_keyring 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2912678 ']' 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2912678 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 2912678 ']' 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 2912678 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2912678 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2912678' 00:31:13.251 killing process with pid 2912678 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 2912678 00:31:13.251 04:47:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 2912678 00:31:13.510 04:47:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:13.511 04:47:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:13.511 04:47:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:13.511 04:47:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:13.511 04:47:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:13.511 04:47:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.511 04:47:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:13.511 04:47:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.415 04:47:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:15.415 00:31:15.415 real 0m13.373s 00:31:15.415 user 0m19.293s 00:31:15.415 sys 0m2.859s 00:31:15.415 04:47:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:15.415 04:47:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.415 ************************************ 00:31:15.415 END TEST nvmf_host_discovery 00:31:15.415 ************************************ 00:31:15.415 04:47:35 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:15.415 04:47:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:15.415 04:47:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:15.415 04:47:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:15.415 ************************************ 00:31:15.415 START TEST nvmf_host_multipath_status 00:31:15.415 ************************************ 00:31:15.415 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:15.675 * Looking for test storage... 
00:31:15.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:15.675 04:47:35 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:15.675 04:47:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:17.584 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:17.584 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:17.584 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:17.584 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:17.584 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:17.584 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:17.584 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:17.584 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:17.584 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:17.584 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:17.584 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:17.585 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:17.585 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
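[annotation, not part of the captured trace] The nvmf/common.sh lines above classify candidate NICs purely by PCI vendor/device ID — 0x8086:0x159b is matched into the e810 list — and then read the bound kernel interfaces out of /sys/bus/pci/devices/$pci/net/. A minimal standalone sketch of the same lookup is shown below; it assumes lspci is available on the host and is illustrative only, not an excerpt from the test scripts:

    # List Intel E810 ports (vendor 0x8086, device 0x159b) and the net devices
    # the kernel has bound to them, mirroring what gather_supported_nvmf_pci_devs does.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        echo "$pci -> $(ls /sys/bus/pci/devices/"$pci"/net/ 2>/dev/null)"
    done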
00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:17.585 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:17.585 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:17.585 04:47:37 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:17.585 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:17.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:17.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:31:17.844 00:31:17.844 --- 10.0.0.2 ping statistics --- 00:31:17.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.844 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:17.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:17.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:31:17.844 00:31:17.844 --- 10.0.0.1 ping statistics --- 00:31:17.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.844 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2915846 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2915846 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 2915846 ']' 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:17.844 04:47:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:17.844 [2024-07-14 04:47:37.887303] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
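[annotation, not part of the captured trace] At this point the target application has been launched inside the cvl_0_0_ns_spdk namespace with nvmf_tgt -i 0 -e 0xFFFF -m 0x3 and the script is waiting on its RPC socket. The target-side configuration that follows in the trace amounts to the RPC sequence sketched below; this is a condensed restatement of the calls visible in the log, not a verbatim excerpt from multipath_status.sh:

    # Condensed view of the target configuration performed by multipath_status.sh
    # (transport, backing malloc bdev, subsystem, namespace, and two TCP listeners).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421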
00:31:17.844 [2024-07-14 04:47:37.887379] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:17.844 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.844 [2024-07-14 04:47:37.956138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:18.102 [2024-07-14 04:47:38.046099] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:18.102 [2024-07-14 04:47:38.046157] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:18.102 [2024-07-14 04:47:38.046173] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:18.102 [2024-07-14 04:47:38.046186] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:18.102 [2024-07-14 04:47:38.046198] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:18.102 [2024-07-14 04:47:38.046285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:18.102 [2024-07-14 04:47:38.046292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.102 04:47:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:18.102 04:47:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:18.102 04:47:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:18.102 04:47:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:18.102 04:47:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:18.102 04:47:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:18.102 04:47:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2915846 00:31:18.102 04:47:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:18.360 [2024-07-14 04:47:38.468162] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:18.360 04:47:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:18.618 Malloc0 00:31:18.619 04:47:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:18.876 04:47:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:19.132 04:47:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:19.390 [2024-07-14 04:47:39.492143] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:19.390 04:47:39 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:19.647 [2024-07-14 04:47:39.732791] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:19.647 04:47:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2916015 00:31:19.647 04:47:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:19.647 04:47:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:19.647 04:47:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2916015 /var/tmp/bdevperf.sock 00:31:19.647 04:47:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 2916015 ']' 00:31:19.647 04:47:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:19.647 04:47:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:19.648 04:47:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:19.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:19.648 04:47:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:19.648 04:47:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:19.905 04:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:19.905 04:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:19.905 04:47:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:20.163 04:47:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:20.730 Nvme0n1 00:31:20.730 04:47:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:20.987 Nvme0n1 00:31:20.987 04:47:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:20.988 04:47:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:23.520 04:47:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:23.520 04:47:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:23.520 04:47:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:23.520 04:47:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:24.452 04:47:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:24.452 04:47:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:24.452 04:47:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.452 04:47:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:24.710 04:47:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.710 04:47:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:24.710 04:47:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.710 04:47:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:24.967 04:47:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:24.967 04:47:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:24.967 04:47:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.967 04:47:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:25.226 04:47:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.226 04:47:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:25.226 04:47:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.226 04:47:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:25.512 04:47:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.513 04:47:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:25.513 04:47:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.513 04:47:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:31:25.795 04:47:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.795 04:47:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:25.795 04:47:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.795 04:47:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:26.053 04:47:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.053 04:47:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:26.053 04:47:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:26.311 04:47:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:26.571 04:47:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:27.506 04:47:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:27.506 04:47:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:27.506 04:47:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.506 04:47:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:27.765 04:47:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:27.765 04:47:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:27.765 04:47:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.765 04:47:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:28.023 04:47:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.023 04:47:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:28.023 04:47:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.024 04:47:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:28.282 04:47:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:31:28.282 04:47:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:28.282 04:47:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.282 04:47:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:28.540 04:47:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.540 04:47:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:28.540 04:47:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.540 04:47:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:28.799 04:47:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.799 04:47:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:28.799 04:47:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.799 04:47:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:29.057 04:47:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.057 04:47:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:29.057 04:47:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:29.315 04:47:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:29.572 04:47:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:30.507 04:47:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:30.507 04:47:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:30.507 04:47:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.507 04:47:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:30.765 04:47:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.765 04:47:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:31:30.765 04:47:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.765 04:47:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:31.023 04:47:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:31.023 04:47:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:31.023 04:47:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.023 04:47:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:31.280 04:47:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.280 04:47:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:31.280 04:47:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.280 04:47:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:31.538 04:47:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.538 04:47:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:31.538 04:47:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.538 04:47:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:31.796 04:47:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.796 04:47:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:31.796 04:47:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.796 04:47:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:32.054 04:47:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.054 04:47:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:32.054 04:47:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:32.311 04:47:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:32.570 04:47:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:33.505 04:47:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:33.505 04:47:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:33.505 04:47:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.505 04:47:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:33.761 04:47:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.761 04:47:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:33.761 04:47:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.761 04:47:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:34.018 04:47:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:34.018 04:47:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:34.018 04:47:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.018 04:47:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:34.276 04:47:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.276 04:47:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:34.276 04:47:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.276 04:47:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:34.535 04:47:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.535 04:47:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:34.535 04:47:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.535 04:47:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:34.794 04:47:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
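The multipath_status.sh@64 and @68-@73 traces above all repeat one pattern: query the bdevperf RPC socket for its I/O paths and pull a single field out of the path on a given listener port. A minimal sketch of the port_status/check_status helpers implied by those traces follows; the function bodies are an assumption reconstructed from the trace output, and rpc_py is only shorthand for the rpc.py path shown in the log.

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as it appears in the traces above

# Succeed iff the io_path on the given port reports the expected value for
# one of the fields "current", "connected" or "accessible".
port_status() {
    local port=$1 field=$2 expected=$3
    local actual
    actual=$("$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

# check_status <4420 current> <4421 current> <4420 connected> <4421 connected> <4420 accessible> <4421 accessible>
check_status() {
    port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
    port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
    port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
}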
00:31:34.794 04:47:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:34.794 04:47:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.794 04:47:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:35.052 04:47:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:35.052 04:47:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:35.052 04:47:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:35.309 04:47:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:35.568 04:47:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:36.502 04:47:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:36.502 04:47:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:36.502 04:47:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.502 04:47:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:36.760 04:47:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:36.760 04:47:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:36.760 04:47:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.760 04:47:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:37.018 04:47:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:37.018 04:47:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:37.018 04:47:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.018 04:47:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:37.276 04:47:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:37.276 04:47:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
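The @59/@60 calls above always come as a pair, one per listener. A sketch of the set_ANA_state helper they imply (an assumption; the real script may differ, and rpc_py is the same shorthand as in the sketch above):

# set_ANA_state <state for 4420 listener> <state for 4421 listener>
# Note these RPCs go to the target's default RPC socket, not to /var/tmp/bdevperf.sock.
set_ANA_state() {
    "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

Each transition (for example the set_ANA_state inaccessible inaccessible above) is followed by sleep 1 and a check_status call listing the expected current/connected/accessible values for both ports.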
00:31:37.276 04:47:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.276 04:47:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:37.535 04:47:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:37.535 04:47:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:37.535 04:47:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.535 04:47:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:37.806 04:47:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:37.806 04:47:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:37.806 04:47:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.806 04:47:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:38.064 04:47:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:38.064 04:47:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:38.064 04:47:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:38.322 04:47:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:38.580 04:47:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:39.513 04:47:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:39.513 04:47:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:39.513 04:47:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:39.513 04:47:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:39.771 04:47:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:39.771 04:47:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:39.771 04:47:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:39.771 04:47:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:40.029 04:48:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:40.029 04:48:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:40.029 04:48:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.029 04:48:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:40.286 04:48:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:40.286 04:48:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:40.286 04:48:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.286 04:48:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:40.544 04:48:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:40.544 04:48:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:40.544 04:48:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.544 04:48:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:40.802 04:48:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:40.802 04:48:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:40.802 04:48:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.802 04:48:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:41.060 04:48:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.060 04:48:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:41.319 04:48:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:41.319 04:48:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:31:41.578 04:48:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:41.835 04:48:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:43.218 04:48:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:43.218 04:48:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:43.218 04:48:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.218 04:48:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:43.218 04:48:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.218 04:48:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:43.218 04:48:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.218 04:48:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:43.476 04:48:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.476 04:48:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:43.476 04:48:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.476 04:48:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:43.733 04:48:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.733 04:48:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:43.733 04:48:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.733 04:48:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:43.991 04:48:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.991 04:48:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:43.991 04:48:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.991 04:48:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:44.249 04:48:04 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.249 04:48:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:44.249 04:48:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.249 04:48:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:44.506 04:48:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.506 04:48:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:44.506 04:48:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:44.764 04:48:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:45.022 04:48:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:45.954 04:48:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:45.954 04:48:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:45.954 04:48:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.954 04:48:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:46.211 04:48:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:46.212 04:48:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:46.212 04:48:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.212 04:48:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:46.469 04:48:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.469 04:48:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:46.469 04:48:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.469 04:48:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:46.727 04:48:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.727 04:48:06 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:46.727 04:48:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.727 04:48:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:46.984 04:48:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.984 04:48:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:46.984 04:48:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.985 04:48:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:47.242 04:48:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.242 04:48:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:47.242 04:48:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.242 04:48:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:47.500 04:48:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.500 04:48:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:47.500 04:48:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:47.757 04:48:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:48.015 04:48:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:48.963 04:48:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:48.963 04:48:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:48.963 04:48:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.963 04:48:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:49.220 04:48:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.221 04:48:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:49.221 04:48:09 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.221 04:48:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:49.478 04:48:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.478 04:48:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:49.478 04:48:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.478 04:48:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:49.736 04:48:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.736 04:48:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:49.736 04:48:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.736 04:48:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:49.992 04:48:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.992 04:48:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:49.993 04:48:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.993 04:48:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:50.249 04:48:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.249 04:48:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:50.249 04:48:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.249 04:48:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:50.506 04:48:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.506 04:48:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:50.506 04:48:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:50.763 04:48:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:51.021 04:48:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:51.954 04:48:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:51.954 04:48:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:51.954 04:48:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.954 04:48:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:52.212 04:48:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.212 04:48:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:52.212 04:48:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.212 04:48:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:52.470 04:48:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:52.470 04:48:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:52.470 04:48:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.470 04:48:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:52.729 04:48:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.729 04:48:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:52.729 04:48:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.729 04:48:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:52.987 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.987 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:52.987 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.987 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:53.256 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.256 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:53.256 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.256 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:53.544 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:53.544 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2916015 00:31:53.544 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 2916015 ']' 00:31:53.544 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 2916015 00:31:53.544 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:53.544 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:53.544 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2916015 00:31:53.544 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:31:53.544 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:31:53.544 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2916015' 00:31:53.544 killing process with pid 2916015 00:31:53.544 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 2916015 00:31:53.544 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 2916015 00:31:53.811 Connection closed with partial response: 00:31:53.811 00:31:53.811 00:31:53.811 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2916015 00:31:53.811 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:53.811 [2024-07-14 04:47:39.794429] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:53.812 [2024-07-14 04:47:39.794510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2916015 ] 00:31:53.812 EAL: No free 2048 kB hugepages reported on node 1 00:31:53.812 [2024-07-14 04:47:39.859327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.812 [2024-07-14 04:47:39.947894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:53.812 Running I/O for 90 seconds... 
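For context, the I/O load that the completion notices below belong to was set up by the commands traced earlier in this run. A condensed sketch of that setup, with paths shortened to $rootdir and $rpc_py (shorthand only) and waitforlisten being the helper from autotest_common.sh seen in the traces:

$rootdir/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
bdevperf_pid=$!
waitforlisten $bdevperf_pid /var/tmp/bdevperf.sock
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
$rootdir/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &

The second attach_controller with -x multipath adds port 4421 as an extra path to the same Nvme0 controller, so the 90-second verify workload keeps running while the listeners' ANA states are flipped; partway through, the test also switches the bdev to the active_active policy with bdev_nvme_set_multipath_policy, as seen in the earlier @116 trace.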
00:31:53.812 [2024-07-14 04:47:55.377195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.377251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.377310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.377331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.377355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.377371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.377392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.377408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.377429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.377446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.377467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.377483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.377504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.377520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.377541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.377557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.377578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.377594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.377615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.377630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.377652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.377677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.377699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.377715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.377736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.377752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.377773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.377789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.377810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.377826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.377864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.377890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.377913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.377930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.377952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.377967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.377989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.378005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.378026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.378043] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.378065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.378081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.378102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.378118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.378140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:73768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.812 [2024-07-14 04:47:55.378157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.378199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.812 [2024-07-14 04:47:55.378216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.378237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.812 [2024-07-14 04:47:55.378253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.378274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.378291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.378398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.378419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.378446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.378464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.378488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.378504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.378527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
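The *NOTICE* lines in this dump are nvme_qpair.c printing each queued command and its completion. The (03/02) status is status code type 0x3 (path related), status code 0x02, i.e. the ASYMMETRIC ACCESS INACCESSIBLE state the test had just applied to one of the listeners, and the host's multipath layer is expected to retry those I/Os on the remaining accessible path. As a purely illustrative one-liner (not part of the test), the number of such completions in the saved log could be counted with:

grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt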
00:31:53.812 [2024-07-14 04:47:55.378543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.378567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.378583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.378607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.812 [2024-07-14 04:47:55.378623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:53.812 [2024-07-14 04:47:55.378646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.378662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.378685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.378701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.378724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.378740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.378784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.378801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.378824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.378839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.378885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.378903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.378943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.378960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.378984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 
lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.379000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.379024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.379041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.379065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.379081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.379106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.379123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.379147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.379164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.379188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.379204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.379228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.379244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.379268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.379285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.379324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.379345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.379370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.379386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.379409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.379425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.379448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.379464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.379487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.379503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.379526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.379541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.379564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.379581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.379604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.379620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.379642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.379658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.379682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.379698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.379721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.379737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.379936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.379960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
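The NOTICE pairs in this burst come from SPDK's qpair print helpers: nvme_io_qpair_print_command echoes the submitted command and spdk_nvme_print_completion echoes its completion, where the (03/02) field is the status code type / status code pair, i.e. Path Related Status / Asymmetric Access Inaccessible. That status is consistent with the path being reported ANA-inaccessible during this phase of the multipath_status run, so the repeated completions are expected output for this window rather than a separate failure. A minimal sketch for summarizing such a burst offline is shown next; it assumes the console output has been saved one record per line to a file, and the file path and awk field handling are illustrative only, not part of the harness:

  # Hypothetical saved copy of this console log, one record per line.
  log=/tmp/multipath_status_console.log

  # Completion records carry the (sct/sc) pair; 03/02 decodes to
  # Path Related Status / Asymmetric Access Inaccessible.
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' "$log"

  # Group the matching submissions by opcode (READ vs WRITE).
  grep 'nvme_io_qpair_print_command' "$log" \
    | awk '{ for (i = 1; i <= NF; i++) if ($i == "*NOTICE*:") print $(i + 1) }' \
    | sort | uniq -c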
00:31:53.813 [2024-07-14 04:47:55.379991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.380014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.380042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.380060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.380086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.380103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.380129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.380147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.380174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.380191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.380218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.380234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.380275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.380292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.380317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.380334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.380374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.380392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.380419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.813 [2024-07-14 04:47:55.380436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:53.813 [2024-07-14 04:47:55.380463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.814 [2024-07-14 04:47:55.380479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.380505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.814 [2024-07-14 04:47:55.380522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.380548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.814 [2024-07-14 04:47:55.380565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.380596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.814 [2024-07-14 04:47:55.380613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.380640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.814 [2024-07-14 04:47:55.380657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.380754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.814 [2024-07-14 04:47:55.380776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.380808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.380827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.380856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.380881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.380911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.380927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.380956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.380972] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.381017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.381061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.381106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.814 [2024-07-14 04:47:55.381150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.814 [2024-07-14 04:47:55.381210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.814 [2024-07-14 04:47:55.381260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.814 [2024-07-14 04:47:55.381303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.814 [2024-07-14 04:47:55.381346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.381390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:53.814 [2024-07-14 04:47:55.381432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.381475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.381518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.381561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.381603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.381646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.381688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.381732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.381778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.381822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 
nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.381893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.381939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.381967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.381983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.382011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.382028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.382055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.382072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.382099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.382116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.382143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.382160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.382187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.382203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:53.814 [2024-07-14 04:47:55.382247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.814 [2024-07-14 04:47:55.382263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:47:55.382290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:47:55.382306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:47:55.382332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:47:55.382355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:47:55.382383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:47:55.382400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:47:55.382426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:47:55.382442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:47:55.382470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:47:55.382487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:47:55.382514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:47:55.382530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:47:55.382556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:47:55.382573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:47:55.382600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:47:55.382616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:47:55.382643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:47:55.382659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:47:55.382686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:47:55.382702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:47:55.382729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:47:55.382746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002a p:0 m:0 dnr:0 
00:31:53.815 [2024-07-14 04:47:55.382772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:47:55.382788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:47:55.382815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:47:55.382831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:47:55.382882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:47:55.382901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:47:55.382934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:47:55.382951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:47:55.382979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:47:55.382995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:47:55.383023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:47:55.383040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:47:55.383067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:47:55.383084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:47:55.383112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:47:55.383128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:47:55.383171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:47:55.383188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:47:55.383216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:47:55.383233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:48:11.067538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.815 [2024-07-14 04:48:11.067595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:48:11.067656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.815 [2024-07-14 04:48:11.067678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:48:11.067703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.815 [2024-07-14 04:48:11.067721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:48:11.067744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.815 [2024-07-14 04:48:11.067761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:48:11.067783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.815 [2024-07-14 04:48:11.067806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:48:11.067838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.815 [2024-07-14 04:48:11.067855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:48:11.067887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.815 [2024-07-14 04:48:11.067905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:48:11.067927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.815 [2024-07-14 04:48:11.067944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:48:11.067966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.815 [2024-07-14 04:48:11.067982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:48:11.068229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:52112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.815 [2024-07-14 04:48:11.068253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:48:11.068281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.815 [2024-07-14 04:48:11.068299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:48:11.068321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.815 [2024-07-14 04:48:11.068338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:48:11.068361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.815 [2024-07-14 04:48:11.068377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:48:11.068399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.815 [2024-07-14 04:48:11.068415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:48:11.068436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.815 [2024-07-14 04:48:11.068459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:48:11.068482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.815 [2024-07-14 04:48:11.068499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:48:11.068521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:52408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.815 [2024-07-14 04:48:11.068537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:48:11.071827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.815 [2024-07-14 04:48:11.071859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:53.815 [2024-07-14 04:48:11.071900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.816 [2024-07-14 04:48:11.071920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:53.816 [2024-07-14 04:48:11.071943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:52456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:53.816 [2024-07-14 04:48:11.071959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:53.816 [2024-07-14 04:48:11.071981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.816 [2024-07-14 04:48:11.071997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:53.816 [2024-07-14 04:48:11.072019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.816 [2024-07-14 04:48:11.072035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:53.816 [2024-07-14 04:48:11.072058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:52480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.816 [2024-07-14 04:48:11.072074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:53.816 [2024-07-14 04:48:11.072096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:52496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.816 [2024-07-14 04:48:11.072112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:53.816 [2024-07-14 04:48:11.072134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:52512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.816 [2024-07-14 04:48:11.072150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:53.816 [2024-07-14 04:48:11.072172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.816 [2024-07-14 04:48:11.072188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:53.816 [2024-07-14 04:48:11.072210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.816 [2024-07-14 04:48:11.072226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:53.816 [2024-07-14 04:48:11.072248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.816 [2024-07-14 04:48:11.072264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:53.816 [2024-07-14 04:48:11.072286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:52576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.816 [2024-07-14 04:48:11.072302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:53.816 [2024-07-14 04:48:11.072325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 
lba:52592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.816 [2024-07-14 04:48:11.072346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:53.816 [2024-07-14 04:48:11.072368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:52608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.816 [2024-07-14 04:48:11.072385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:53.816 [2024-07-14 04:48:11.072407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:52624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.816 [2024-07-14 04:48:11.072423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:53.816 Received shutdown signal, test time was about 32.370136 seconds 00:31:53.816 00:31:53.816 Latency(us) 00:31:53.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:53.816 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:53.816 Verification LBA range: start 0x0 length 0x4000 00:31:53.816 Nvme0n1 : 32.37 7979.12 31.17 0.00 0.00 16015.82 317.06 4026531.84 00:31:53.816 =================================================================================================================== 00:31:53.816 Total : 7979.12 31.17 0.00 0.00 16015.82 317.06 4026531.84 00:31:53.816 04:48:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:54.074 rmmod nvme_tcp 00:31:54.074 rmmod nvme_fabrics 00:31:54.074 rmmod nvme_keyring 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2915846 ']' 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2915846 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 2915846 ']' 00:31:54.074 04:48:14 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 2915846 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2915846 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2915846' 00:31:54.074 killing process with pid 2915846 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 2915846 00:31:54.074 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 2915846 00:31:54.332 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:54.332 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:54.332 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:54.332 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:54.332 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:54.332 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.332 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:54.332 04:48:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:56.862 04:48:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:56.862 00:31:56.862 real 0m40.917s 00:31:56.862 user 2m1.948s 00:31:56.862 sys 0m11.143s 00:31:56.862 04:48:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:56.862 04:48:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:56.862 ************************************ 00:31:56.862 END TEST nvmf_host_multipath_status 00:31:56.862 ************************************ 00:31:56.862 04:48:16 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:56.862 04:48:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:56.862 04:48:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:56.862 04:48:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:56.862 ************************************ 00:31:56.862 START TEST nvmf_discovery_remove_ifc 00:31:56.862 ************************************ 00:31:56.862 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:56.862 * Looking for test storage... 
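Before the discovery_remove_ifc setup scrolls past, the teardown trace above reduces to a short sequence of shell steps. The following is a condensed, readability-only sketch of what those traced commands did in this run, using only commands and values printed above; it is not a substitute for nvmftestfini in nvmf/common.sh, and the retry loop around modprobe is omitted:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # multipath_status.sh@143-147: drop the test subsystem and the scratch file.
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt

  # nvmfcleanup: flush I/O and unload the kernel initiator modules
  # (the modprobe -r above also rmmod'ed nvme_fabrics and nvme_keyring).
  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # killprocess: stop the nvmf target; wait succeeds here only because the
  # target (PID 2915846 in this run) is a child of the test shell.
  kill 2915846
  wait 2915846

  # nvmf_tcp_fini: _remove_spdk_ns (its output is redirected in the trace)
  # tears down the cvl_0_0_ns_spdk namespace, then the initiator-side
  # address on cvl_0_1 is flushed.
  ip -4 addr flush cvl_0_1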
00:31:56.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:56.862 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:56.862 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:56.862 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:56.862 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:56.862 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:56.862 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:56.862 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:56.862 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:56.862 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:56.862 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:56.862 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:56.863 04:48:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.238 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:58.238 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:31:58.238 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:58.238 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:58.238 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:58.238 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:58.238 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:58.238 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:31:58.238 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:58.238 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:31:58.238 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:31:58.238 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:58.239 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:58.239 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:58.239 04:48:18 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:58.239 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:58.239 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:58.239 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:58.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:58.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:31:58.498 00:31:58.498 --- 10.0.0.2 ping statistics --- 00:31:58.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.498 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:58.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:58.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:31:58.498 00:31:58.498 --- 10.0.0.1 ping statistics --- 00:31:58.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.498 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2922811 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2922811 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 2922811 ']' 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:58.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:58.498 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.498 [2024-07-14 04:48:18.623059] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:31:58.498 [2024-07-14 04:48:18.623167] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:58.498 EAL: No free 2048 kB hugepages reported on node 1 00:31:58.756 [2024-07-14 04:48:18.691568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.756 [2024-07-14 04:48:18.782992] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:58.756 [2024-07-14 04:48:18.783047] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:58.756 [2024-07-14 04:48:18.783078] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:58.756 [2024-07-14 04:48:18.783090] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:58.756 [2024-07-14 04:48:18.783100] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:58.756 [2024-07-14 04:48:18.783127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:58.756 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:58.756 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:31:58.756 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:58.756 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:58.756 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.756 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:58.756 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:58.756 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.756 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.756 [2024-07-14 04:48:18.929650] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:58.756 [2024-07-14 04:48:18.937817] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:59.015 null0 00:31:59.015 [2024-07-14 04:48:18.969777] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:59.015 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.015 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2922838 00:31:59.015 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2922838 /tmp/host.sock 00:31:59.015 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:59.015 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 2922838 ']' 00:31:59.015 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:59.015 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:59.015 
04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:59.015 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:59.015 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:59.015 04:48:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.015 [2024-07-14 04:48:19.034218] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:59.015 [2024-07-14 04:48:19.034296] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922838 ] 00:31:59.015 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.015 [2024-07-14 04:48:19.095287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.015 [2024-07-14 04:48:19.180986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.274 04:48:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:59.274 04:48:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:31:59.274 04:48:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:59.274 04:48:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:59.274 04:48:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.274 04:48:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.274 04:48:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.274 04:48:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:59.274 04:48:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.274 04:48:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.274 04:48:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.274 04:48:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:59.274 04:48:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.274 04:48:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:00.208 [2024-07-14 04:48:20.398077] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:00.208 [2024-07-14 04:48:20.398116] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:00.208 [2024-07-14 04:48:20.398141] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:00.466 [2024-07-14 04:48:20.485412] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:00.467 [2024-07-14 04:48:20.589613] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:00.467 [2024-07-14 04:48:20.589692] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:00.467 [2024-07-14 04:48:20.589743] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:00.467 [2024-07-14 04:48:20.589771] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:00.467 [2024-07-14 04:48:20.589807] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:00.467 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.467 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:00.467 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:00.467 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:00.467 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:00.467 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.467 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:00.467 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:00.467 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:00.467 [2024-07-14 04:48:20.596341] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2414900 was disconnected and freed. delete nvme_qpair. 
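For orientation: the nvme0n1 bdev that wait_for_bdev is checking for above is the result of three RPCs traced at discovery_remove_ifc.sh@65, @66 and @69, all issued against the host-side nvmf_tgt on /tmp/host.sock, while the target-side nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace and listens on 10.0.0.2 port 8009 (discovery) and port 4420 (I/O). A condensed restatement of that sequence, with rpc_cmd standing in for the harness helper seen in the trace (assumed here to forward its arguments to the JSON-RPC socket named by -s):

    # condensed from the trace above; flags copied from @65 and @69
    rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
    rpc_cmd -s /tmp/host.sock framework_start_init
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

The short loss/reconnect/fast-io-fail timeouts are what let the interface-removal phase below give up on the dead path within a couple of seconds instead of retrying indefinitely.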
00:32:00.467 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.467 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:00.467 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:00.467 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:00.729 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:00.729 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:00.729 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:00.729 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:00.729 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.729 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:00.730 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:00.730 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:00.730 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.730 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:00.730 04:48:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:01.659 04:48:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:01.659 04:48:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:01.659 04:48:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:01.659 04:48:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.659 04:48:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.659 04:48:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:01.659 04:48:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:01.659 04:48:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.659 04:48:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:01.659 04:48:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:03.027 04:48:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:03.027 04:48:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:03.027 04:48:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:03.027 04:48:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.027 04:48:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:03.027 04:48:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:32:03.027 04:48:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:03.027 04:48:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.027 04:48:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:03.027 04:48:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:03.957 04:48:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:03.957 04:48:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:03.957 04:48:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.957 04:48:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:03.957 04:48:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:03.957 04:48:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:03.957 04:48:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:03.957 04:48:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.957 04:48:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:03.957 04:48:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:04.889 04:48:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:04.889 04:48:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:04.889 04:48:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:04.889 04:48:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.889 04:48:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.889 04:48:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:04.889 04:48:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:04.889 04:48:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.889 04:48:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:04.889 04:48:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:05.821 04:48:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:05.821 04:48:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:05.821 04:48:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:05.821 04:48:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.821 04:48:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:05.821 04:48:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.821 04:48:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:05.821 04:48:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
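The repeated bdev_get_bdevs | jq -r '.[].name' | sort | xargs blocks interleaved with sleep 1 above are the test's polling loop: get_bdev_list flattens the current bdev names into a single line, and wait_for_bdev re-checks it once per second until it matches the expected value (nvme0n1 while the path is up, the empty string once the interface has been pulled). An approximate reconstruction, assuming the helper names used in the trace; the real bodies in discovery_remove_ifc.sh may differ in detail:

    # approximate reconstruction of the polling traced above
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }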
00:32:05.821 04:48:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:05.821 04:48:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:06.079 [2024-07-14 04:48:26.030484] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:06.079 [2024-07-14 04:48:26.030551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.079 [2024-07-14 04:48:26.030575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.079 [2024-07-14 04:48:26.030593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.079 [2024-07-14 04:48:26.030616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.079 [2024-07-14 04:48:26.030632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.079 [2024-07-14 04:48:26.030646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.079 [2024-07-14 04:48:26.030661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.079 [2024-07-14 04:48:26.030676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.079 [2024-07-14 04:48:26.030691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.079 [2024-07-14 04:48:26.030705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.079 [2024-07-14 04:48:26.030720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db990 is same with the state(5) to be set 00:32:06.079 [2024-07-14 04:48:26.040502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23db990 (9): Bad file descriptor 00:32:06.079 [2024-07-14 04:48:26.050548] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:07.011 04:48:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:07.011 04:48:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:07.011 04:48:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.011 04:48:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:07.011 04:48:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:07.011 04:48:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:07.011 04:48:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:07.011 [2024-07-14 04:48:27.101898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:07.011 [2024-07-14 
04:48:27.101947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23db990 with addr=10.0.0.2, port=4420 00:32:07.011 [2024-07-14 04:48:27.101969] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db990 is same with the state(5) to be set 00:32:07.011 [2024-07-14 04:48:27.102001] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23db990 (9): Bad file descriptor 00:32:07.011 [2024-07-14 04:48:27.102379] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:07.011 [2024-07-14 04:48:27.102416] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:07.011 [2024-07-14 04:48:27.102433] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:07.011 [2024-07-14 04:48:27.102453] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:07.011 [2024-07-14 04:48:27.102477] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:07.011 [2024-07-14 04:48:27.102494] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:07.011 04:48:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.011 04:48:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:07.011 04:48:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:07.944 [2024-07-14 04:48:28.104983] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:07.944 [2024-07-14 04:48:28.105010] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:07.944 [2024-07-14 04:48:28.105029] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:07.944 [2024-07-14 04:48:28.105041] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:07.944 [2024-07-14 04:48:28.105060] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.944 [2024-07-14 04:48:28.105091] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:07.944 [2024-07-14 04:48:28.105121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.944 [2024-07-14 04:48:28.105139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.944 [2024-07-14 04:48:28.105173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.944 [2024-07-14 04:48:28.105188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.944 [2024-07-14 04:48:28.105203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.944 [2024-07-14 04:48:28.105217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.944 [2024-07-14 04:48:28.105231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.944 [2024-07-14 04:48:28.105246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.944 [2024-07-14 04:48:28.105260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.944 [2024-07-14 04:48:28.105274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.944 [2024-07-14 04:48:28.105288] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
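The errno 110 ("Connection timed out", ETIMEDOUT) and the reconnect/reset failures above are the intended outcome of the fault injected earlier at discovery_remove_ifc.sh@75-76: the target-side address was deleted and the link taken down inside the namespace, so each reconnect attempt from the host times out until the two-second controller-loss timeout expires, the controller is put into the failed state, and the discovery entry for nqn.2016-06.io.spdk:cnode0 is removed. The injected fault, copied from the earlier trace:

    # fault injection traced at discovery_remove_ifc.sh@75-76
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down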
00:32:07.944 [2024-07-14 04:48:28.105631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dade0 (9): Bad file descriptor 00:32:07.944 [2024-07-14 04:48:28.106650] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:07.944 [2024-07-14 04:48:28.106675] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:07.944 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:07.944 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:07.944 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:07.944 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.944 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:07.944 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:07.944 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:07.944 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.202 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:08.202 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:08.202 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:08.202 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:08.202 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:08.202 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:08.202 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.202 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:08.202 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:08.202 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:08.202 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:08.202 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.202 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:08.202 04:48:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:09.136 04:48:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:09.136 04:48:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:09.136 04:48:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:09.136 04:48:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.136 04:48:29 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:32:09.136 04:48:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:09.136 04:48:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:09.136 04:48:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.136 04:48:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:09.136 04:48:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:10.096 [2024-07-14 04:48:30.123947] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:10.096 [2024-07-14 04:48:30.123994] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:10.096 [2024-07-14 04:48:30.124018] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:10.096 [2024-07-14 04:48:30.250428] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:10.353 04:48:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:10.353 04:48:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:10.353 04:48:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:10.353 04:48:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.353 04:48:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:10.353 04:48:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:10.353 04:48:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:10.353 04:48:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.353 04:48:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:10.353 04:48:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:10.353 [2024-07-14 04:48:30.354648] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:10.353 [2024-07-14 04:48:30.354710] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:10.353 [2024-07-14 04:48:30.354752] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:10.353 [2024-07-14 04:48:30.354779] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:10.354 [2024-07-14 04:48:30.354794] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:10.354 [2024-07-14 04:48:30.362215] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x23f5f60 was disconnected and freed. delete nvme_qpair. 
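Recovery is the mirror image: once the address is restored and the link brought back up (discovery_remove_ifc.sh@82-83 above), the discovery poller reconnects to 10.0.0.2:8009, attaches a fresh controller as nvme1, frees the old qpair, and the namespace reappears as nvme1n1, which is what the wait_for_bdev nvme1n1 loop is waiting for:

    # recovery step traced at discovery_remove_ifc.sh@82-83
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up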
00:32:11.287 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:11.287 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:11.287 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.287 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:11.287 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:11.287 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:11.287 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:11.287 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.287 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:11.287 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:11.287 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2922838 00:32:11.287 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 2922838 ']' 00:32:11.287 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 2922838 00:32:11.287 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:11.287 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:11.287 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2922838 00:32:11.287 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:11.287 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:11.287 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2922838' 00:32:11.287 killing process with pid 2922838 00:32:11.287 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 2922838 00:32:11.287 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 2922838 00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:11.545 rmmod nvme_tcp 00:32:11.545 rmmod nvme_fabrics 00:32:11.545 rmmod nvme_keyring 00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
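Teardown, traced here and on the following lines, stops the host-side application first and then lets nvmftestfini stop the target and unload the kernel NVMe/TCP modules; the bare rmmod lines above are the verbose output of the modprobe -r calls. A simplified sketch of that path, in trace order (the real killprocess helper also checks the process name before sending the signal):

    # simplified shape of the teardown traced here
    kill "$hostpid" && wait "$hostpid"    # host-side nvmf_tgt, pid 2922838 in this run
    modprobe -v -r nvme-tcp               # unloads nvme_tcp (and, per the rmmod lines, nvme_fabrics/nvme_keyring)
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"    # target-side nvmf_tgt, pid 2922811, traced on the next lines
    ip -4 addr flush cvl_0_1              # final nvmf_tcp_fini step, after remove_spdk_ns runs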
00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2922811 ']' 00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2922811 00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 2922811 ']' 00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 2922811 00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2922811 00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2922811' 00:32:11.545 killing process with pid 2922811 00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 2922811 00:32:11.545 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 2922811 00:32:11.804 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:11.804 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:11.804 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:11.804 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:11.804 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:11.804 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.804 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:11.804 04:48:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.336 04:48:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:14.336 00:32:14.336 real 0m17.447s 00:32:14.336 user 0m25.459s 00:32:14.336 sys 0m2.895s 00:32:14.336 04:48:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:14.336 04:48:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:14.336 ************************************ 00:32:14.336 END TEST nvmf_discovery_remove_ifc 00:32:14.336 ************************************ 00:32:14.336 04:48:34 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:14.336 04:48:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:14.336 04:48:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:14.336 04:48:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:14.336 ************************************ 00:32:14.336 START TEST nvmf_identify_kernel_target 00:32:14.336 ************************************ 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:14.336 * Looking for test storage... 00:32:14.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
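The identify-kernel-target test that starts here re-sources nvmf/common.sh, and this time the trace records a freshly generated host identity (common.sh@17-19): nvme gen-hostnqn produces the host NQN, and the matching UUID is kept as the host ID. Roughly, with the UUID extraction shown only to illustrate the relationship between the two values from this run (common.sh may derive it differently):

    # host identity recorded in the trace above (values from this run)
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # 5b23e107-7094-e311-b1cb-001e67a97d55 (illustrative extraction)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'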
00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:14.336 04:48:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:16.237 04:48:35 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:16.237 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:16.237 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:16.237 
04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:16.237 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:16.237 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:16.237 04:48:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:32:16.237 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:16.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:16.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:32:16.238 00:32:16.238 --- 10.0.0.2 ping statistics --- 00:32:16.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:16.238 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:16.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:16.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:32:16.238 00:32:16.238 --- 10.0.0.1 ping statistics --- 00:32:16.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:16.238 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.238 
04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:16.238 04:48:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:17.170 Waiting for block devices as requested 00:32:17.170 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:17.170 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:17.428 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:17.428 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:17.428 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:17.687 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:17.687 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:17.687 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:17.687 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:17.687 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:17.945 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:17.945 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:17.945 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:18.203 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:18.203 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:18.203 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:18.203 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:18.462 04:48:38 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:18.462 No valid GPT data, bailing 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:18.462 00:32:18.462 Discovery Log Number of Records 2, Generation counter 2 00:32:18.462 =====Discovery Log Entry 0====== 00:32:18.462 trtype: tcp 00:32:18.462 adrfam: ipv4 00:32:18.462 subtype: current discovery subsystem 00:32:18.462 treq: not specified, sq flow control disable supported 00:32:18.462 portid: 1 00:32:18.462 trsvcid: 4420 00:32:18.462 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:18.462 traddr: 10.0.0.1 00:32:18.462 eflags: none 00:32:18.462 sectype: none 00:32:18.462 =====Discovery Log Entry 1====== 
00:32:18.462 trtype: tcp 00:32:18.462 adrfam: ipv4 00:32:18.462 subtype: nvme subsystem 00:32:18.462 treq: not specified, sq flow control disable supported 00:32:18.462 portid: 1 00:32:18.462 trsvcid: 4420 00:32:18.462 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:18.462 traddr: 10.0.0.1 00:32:18.462 eflags: none 00:32:18.462 sectype: none 00:32:18.462 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:18.462 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:18.462 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.722 ===================================================== 00:32:18.722 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:18.722 ===================================================== 00:32:18.722 Controller Capabilities/Features 00:32:18.722 ================================ 00:32:18.722 Vendor ID: 0000 00:32:18.722 Subsystem Vendor ID: 0000 00:32:18.722 Serial Number: 74ef76b4dbeb2fac42fd 00:32:18.722 Model Number: Linux 00:32:18.722 Firmware Version: 6.7.0-68 00:32:18.722 Recommended Arb Burst: 0 00:32:18.722 IEEE OUI Identifier: 00 00 00 00:32:18.722 Multi-path I/O 00:32:18.722 May have multiple subsystem ports: No 00:32:18.722 May have multiple controllers: No 00:32:18.722 Associated with SR-IOV VF: No 00:32:18.722 Max Data Transfer Size: Unlimited 00:32:18.722 Max Number of Namespaces: 0 00:32:18.722 Max Number of I/O Queues: 1024 00:32:18.722 NVMe Specification Version (VS): 1.3 00:32:18.722 NVMe Specification Version (Identify): 1.3 00:32:18.722 Maximum Queue Entries: 1024 00:32:18.722 Contiguous Queues Required: No 00:32:18.722 Arbitration Mechanisms Supported 00:32:18.722 Weighted Round Robin: Not Supported 00:32:18.722 Vendor Specific: Not Supported 00:32:18.722 Reset Timeout: 7500 ms 00:32:18.722 Doorbell Stride: 4 bytes 00:32:18.722 NVM Subsystem Reset: Not Supported 00:32:18.722 Command Sets Supported 00:32:18.722 NVM Command Set: Supported 00:32:18.722 Boot Partition: Not Supported 00:32:18.722 Memory Page Size Minimum: 4096 bytes 00:32:18.722 Memory Page Size Maximum: 4096 bytes 00:32:18.722 Persistent Memory Region: Not Supported 00:32:18.722 Optional Asynchronous Events Supported 00:32:18.722 Namespace Attribute Notices: Not Supported 00:32:18.722 Firmware Activation Notices: Not Supported 00:32:18.722 ANA Change Notices: Not Supported 00:32:18.722 PLE Aggregate Log Change Notices: Not Supported 00:32:18.722 LBA Status Info Alert Notices: Not Supported 00:32:18.722 EGE Aggregate Log Change Notices: Not Supported 00:32:18.722 Normal NVM Subsystem Shutdown event: Not Supported 00:32:18.722 Zone Descriptor Change Notices: Not Supported 00:32:18.722 Discovery Log Change Notices: Supported 00:32:18.722 Controller Attributes 00:32:18.722 128-bit Host Identifier: Not Supported 00:32:18.722 Non-Operational Permissive Mode: Not Supported 00:32:18.722 NVM Sets: Not Supported 00:32:18.722 Read Recovery Levels: Not Supported 00:32:18.722 Endurance Groups: Not Supported 00:32:18.722 Predictable Latency Mode: Not Supported 00:32:18.722 Traffic Based Keep ALive: Not Supported 00:32:18.722 Namespace Granularity: Not Supported 00:32:18.722 SQ Associations: Not Supported 00:32:18.722 UUID List: Not Supported 00:32:18.722 Multi-Domain Subsystem: Not Supported 00:32:18.722 Fixed Capacity Management: Not Supported 00:32:18.722 Variable Capacity Management: Not 
Supported 00:32:18.722 Delete Endurance Group: Not Supported 00:32:18.722 Delete NVM Set: Not Supported 00:32:18.722 Extended LBA Formats Supported: Not Supported 00:32:18.722 Flexible Data Placement Supported: Not Supported 00:32:18.722 00:32:18.722 Controller Memory Buffer Support 00:32:18.722 ================================ 00:32:18.722 Supported: No 00:32:18.722 00:32:18.722 Persistent Memory Region Support 00:32:18.722 ================================ 00:32:18.722 Supported: No 00:32:18.722 00:32:18.722 Admin Command Set Attributes 00:32:18.722 ============================ 00:32:18.722 Security Send/Receive: Not Supported 00:32:18.722 Format NVM: Not Supported 00:32:18.722 Firmware Activate/Download: Not Supported 00:32:18.722 Namespace Management: Not Supported 00:32:18.722 Device Self-Test: Not Supported 00:32:18.722 Directives: Not Supported 00:32:18.722 NVMe-MI: Not Supported 00:32:18.722 Virtualization Management: Not Supported 00:32:18.722 Doorbell Buffer Config: Not Supported 00:32:18.722 Get LBA Status Capability: Not Supported 00:32:18.722 Command & Feature Lockdown Capability: Not Supported 00:32:18.722 Abort Command Limit: 1 00:32:18.722 Async Event Request Limit: 1 00:32:18.722 Number of Firmware Slots: N/A 00:32:18.722 Firmware Slot 1 Read-Only: N/A 00:32:18.722 Firmware Activation Without Reset: N/A 00:32:18.722 Multiple Update Detection Support: N/A 00:32:18.722 Firmware Update Granularity: No Information Provided 00:32:18.722 Per-Namespace SMART Log: No 00:32:18.722 Asymmetric Namespace Access Log Page: Not Supported 00:32:18.722 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:18.722 Command Effects Log Page: Not Supported 00:32:18.722 Get Log Page Extended Data: Supported 00:32:18.722 Telemetry Log Pages: Not Supported 00:32:18.722 Persistent Event Log Pages: Not Supported 00:32:18.722 Supported Log Pages Log Page: May Support 00:32:18.722 Commands Supported & Effects Log Page: Not Supported 00:32:18.722 Feature Identifiers & Effects Log Page:May Support 00:32:18.722 NVMe-MI Commands & Effects Log Page: May Support 00:32:18.722 Data Area 4 for Telemetry Log: Not Supported 00:32:18.722 Error Log Page Entries Supported: 1 00:32:18.722 Keep Alive: Not Supported 00:32:18.722 00:32:18.722 NVM Command Set Attributes 00:32:18.722 ========================== 00:32:18.722 Submission Queue Entry Size 00:32:18.722 Max: 1 00:32:18.722 Min: 1 00:32:18.722 Completion Queue Entry Size 00:32:18.722 Max: 1 00:32:18.722 Min: 1 00:32:18.722 Number of Namespaces: 0 00:32:18.722 Compare Command: Not Supported 00:32:18.722 Write Uncorrectable Command: Not Supported 00:32:18.722 Dataset Management Command: Not Supported 00:32:18.723 Write Zeroes Command: Not Supported 00:32:18.723 Set Features Save Field: Not Supported 00:32:18.723 Reservations: Not Supported 00:32:18.723 Timestamp: Not Supported 00:32:18.723 Copy: Not Supported 00:32:18.723 Volatile Write Cache: Not Present 00:32:18.723 Atomic Write Unit (Normal): 1 00:32:18.723 Atomic Write Unit (PFail): 1 00:32:18.723 Atomic Compare & Write Unit: 1 00:32:18.723 Fused Compare & Write: Not Supported 00:32:18.723 Scatter-Gather List 00:32:18.723 SGL Command Set: Supported 00:32:18.723 SGL Keyed: Not Supported 00:32:18.723 SGL Bit Bucket Descriptor: Not Supported 00:32:18.723 SGL Metadata Pointer: Not Supported 00:32:18.723 Oversized SGL: Not Supported 00:32:18.723 SGL Metadata Address: Not Supported 00:32:18.723 SGL Offset: Supported 00:32:18.723 Transport SGL Data Block: Not Supported 00:32:18.723 Replay Protected Memory Block: 
Not Supported 00:32:18.723 00:32:18.723 Firmware Slot Information 00:32:18.723 ========================= 00:32:18.723 Active slot: 0 00:32:18.723 00:32:18.723 00:32:18.723 Error Log 00:32:18.723 ========= 00:32:18.723 00:32:18.723 Active Namespaces 00:32:18.723 ================= 00:32:18.723 Discovery Log Page 00:32:18.723 ================== 00:32:18.723 Generation Counter: 2 00:32:18.723 Number of Records: 2 00:32:18.723 Record Format: 0 00:32:18.723 00:32:18.723 Discovery Log Entry 0 00:32:18.723 ---------------------- 00:32:18.723 Transport Type: 3 (TCP) 00:32:18.723 Address Family: 1 (IPv4) 00:32:18.723 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:18.723 Entry Flags: 00:32:18.723 Duplicate Returned Information: 0 00:32:18.723 Explicit Persistent Connection Support for Discovery: 0 00:32:18.723 Transport Requirements: 00:32:18.723 Secure Channel: Not Specified 00:32:18.723 Port ID: 1 (0x0001) 00:32:18.723 Controller ID: 65535 (0xffff) 00:32:18.723 Admin Max SQ Size: 32 00:32:18.723 Transport Service Identifier: 4420 00:32:18.723 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:18.723 Transport Address: 10.0.0.1 00:32:18.723 Discovery Log Entry 1 00:32:18.723 ---------------------- 00:32:18.723 Transport Type: 3 (TCP) 00:32:18.723 Address Family: 1 (IPv4) 00:32:18.723 Subsystem Type: 2 (NVM Subsystem) 00:32:18.723 Entry Flags: 00:32:18.723 Duplicate Returned Information: 0 00:32:18.723 Explicit Persistent Connection Support for Discovery: 0 00:32:18.723 Transport Requirements: 00:32:18.723 Secure Channel: Not Specified 00:32:18.723 Port ID: 1 (0x0001) 00:32:18.723 Controller ID: 65535 (0xffff) 00:32:18.723 Admin Max SQ Size: 32 00:32:18.723 Transport Service Identifier: 4420 00:32:18.723 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:18.723 Transport Address: 10.0.0.1 00:32:18.723 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:18.723 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.723 get_feature(0x01) failed 00:32:18.723 get_feature(0x02) failed 00:32:18.723 get_feature(0x04) failed 00:32:18.723 ===================================================== 00:32:18.723 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:18.723 ===================================================== 00:32:18.723 Controller Capabilities/Features 00:32:18.723 ================================ 00:32:18.723 Vendor ID: 0000 00:32:18.723 Subsystem Vendor ID: 0000 00:32:18.723 Serial Number: 4502a7dc4c058fb7cf5a 00:32:18.723 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:18.723 Firmware Version: 6.7.0-68 00:32:18.723 Recommended Arb Burst: 6 00:32:18.723 IEEE OUI Identifier: 00 00 00 00:32:18.723 Multi-path I/O 00:32:18.723 May have multiple subsystem ports: Yes 00:32:18.723 May have multiple controllers: Yes 00:32:18.723 Associated with SR-IOV VF: No 00:32:18.723 Max Data Transfer Size: Unlimited 00:32:18.723 Max Number of Namespaces: 1024 00:32:18.723 Max Number of I/O Queues: 128 00:32:18.723 NVMe Specification Version (VS): 1.3 00:32:18.723 NVMe Specification Version (Identify): 1.3 00:32:18.723 Maximum Queue Entries: 1024 00:32:18.723 Contiguous Queues Required: No 00:32:18.723 Arbitration Mechanisms Supported 00:32:18.723 Weighted Round Robin: Not Supported 00:32:18.723 Vendor Specific: Not Supported 
00:32:18.723 Reset Timeout: 7500 ms 00:32:18.723 Doorbell Stride: 4 bytes 00:32:18.723 NVM Subsystem Reset: Not Supported 00:32:18.723 Command Sets Supported 00:32:18.723 NVM Command Set: Supported 00:32:18.723 Boot Partition: Not Supported 00:32:18.723 Memory Page Size Minimum: 4096 bytes 00:32:18.723 Memory Page Size Maximum: 4096 bytes 00:32:18.723 Persistent Memory Region: Not Supported 00:32:18.723 Optional Asynchronous Events Supported 00:32:18.723 Namespace Attribute Notices: Supported 00:32:18.723 Firmware Activation Notices: Not Supported 00:32:18.723 ANA Change Notices: Supported 00:32:18.723 PLE Aggregate Log Change Notices: Not Supported 00:32:18.723 LBA Status Info Alert Notices: Not Supported 00:32:18.723 EGE Aggregate Log Change Notices: Not Supported 00:32:18.723 Normal NVM Subsystem Shutdown event: Not Supported 00:32:18.723 Zone Descriptor Change Notices: Not Supported 00:32:18.723 Discovery Log Change Notices: Not Supported 00:32:18.723 Controller Attributes 00:32:18.723 128-bit Host Identifier: Supported 00:32:18.723 Non-Operational Permissive Mode: Not Supported 00:32:18.723 NVM Sets: Not Supported 00:32:18.723 Read Recovery Levels: Not Supported 00:32:18.723 Endurance Groups: Not Supported 00:32:18.723 Predictable Latency Mode: Not Supported 00:32:18.723 Traffic Based Keep ALive: Supported 00:32:18.723 Namespace Granularity: Not Supported 00:32:18.723 SQ Associations: Not Supported 00:32:18.723 UUID List: Not Supported 00:32:18.723 Multi-Domain Subsystem: Not Supported 00:32:18.723 Fixed Capacity Management: Not Supported 00:32:18.723 Variable Capacity Management: Not Supported 00:32:18.723 Delete Endurance Group: Not Supported 00:32:18.723 Delete NVM Set: Not Supported 00:32:18.723 Extended LBA Formats Supported: Not Supported 00:32:18.723 Flexible Data Placement Supported: Not Supported 00:32:18.723 00:32:18.723 Controller Memory Buffer Support 00:32:18.723 ================================ 00:32:18.723 Supported: No 00:32:18.723 00:32:18.723 Persistent Memory Region Support 00:32:18.723 ================================ 00:32:18.723 Supported: No 00:32:18.723 00:32:18.723 Admin Command Set Attributes 00:32:18.723 ============================ 00:32:18.723 Security Send/Receive: Not Supported 00:32:18.723 Format NVM: Not Supported 00:32:18.723 Firmware Activate/Download: Not Supported 00:32:18.723 Namespace Management: Not Supported 00:32:18.723 Device Self-Test: Not Supported 00:32:18.723 Directives: Not Supported 00:32:18.723 NVMe-MI: Not Supported 00:32:18.723 Virtualization Management: Not Supported 00:32:18.723 Doorbell Buffer Config: Not Supported 00:32:18.723 Get LBA Status Capability: Not Supported 00:32:18.723 Command & Feature Lockdown Capability: Not Supported 00:32:18.723 Abort Command Limit: 4 00:32:18.723 Async Event Request Limit: 4 00:32:18.723 Number of Firmware Slots: N/A 00:32:18.723 Firmware Slot 1 Read-Only: N/A 00:32:18.723 Firmware Activation Without Reset: N/A 00:32:18.723 Multiple Update Detection Support: N/A 00:32:18.723 Firmware Update Granularity: No Information Provided 00:32:18.723 Per-Namespace SMART Log: Yes 00:32:18.723 Asymmetric Namespace Access Log Page: Supported 00:32:18.723 ANA Transition Time : 10 sec 00:32:18.723 00:32:18.723 Asymmetric Namespace Access Capabilities 00:32:18.723 ANA Optimized State : Supported 00:32:18.723 ANA Non-Optimized State : Supported 00:32:18.723 ANA Inaccessible State : Supported 00:32:18.723 ANA Persistent Loss State : Supported 00:32:18.723 ANA Change State : Supported 00:32:18.723 ANAGRPID is not 
changed : No 00:32:18.723 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:18.723 00:32:18.723 ANA Group Identifier Maximum : 128 00:32:18.723 Number of ANA Group Identifiers : 128 00:32:18.723 Max Number of Allowed Namespaces : 1024 00:32:18.723 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:18.723 Command Effects Log Page: Supported 00:32:18.723 Get Log Page Extended Data: Supported 00:32:18.723 Telemetry Log Pages: Not Supported 00:32:18.723 Persistent Event Log Pages: Not Supported 00:32:18.723 Supported Log Pages Log Page: May Support 00:32:18.723 Commands Supported & Effects Log Page: Not Supported 00:32:18.723 Feature Identifiers & Effects Log Page:May Support 00:32:18.723 NVMe-MI Commands & Effects Log Page: May Support 00:32:18.723 Data Area 4 for Telemetry Log: Not Supported 00:32:18.723 Error Log Page Entries Supported: 128 00:32:18.723 Keep Alive: Supported 00:32:18.723 Keep Alive Granularity: 1000 ms 00:32:18.723 00:32:18.723 NVM Command Set Attributes 00:32:18.723 ========================== 00:32:18.723 Submission Queue Entry Size 00:32:18.723 Max: 64 00:32:18.723 Min: 64 00:32:18.723 Completion Queue Entry Size 00:32:18.723 Max: 16 00:32:18.723 Min: 16 00:32:18.723 Number of Namespaces: 1024 00:32:18.724 Compare Command: Not Supported 00:32:18.724 Write Uncorrectable Command: Not Supported 00:32:18.724 Dataset Management Command: Supported 00:32:18.724 Write Zeroes Command: Supported 00:32:18.724 Set Features Save Field: Not Supported 00:32:18.724 Reservations: Not Supported 00:32:18.724 Timestamp: Not Supported 00:32:18.724 Copy: Not Supported 00:32:18.724 Volatile Write Cache: Present 00:32:18.724 Atomic Write Unit (Normal): 1 00:32:18.724 Atomic Write Unit (PFail): 1 00:32:18.724 Atomic Compare & Write Unit: 1 00:32:18.724 Fused Compare & Write: Not Supported 00:32:18.724 Scatter-Gather List 00:32:18.724 SGL Command Set: Supported 00:32:18.724 SGL Keyed: Not Supported 00:32:18.724 SGL Bit Bucket Descriptor: Not Supported 00:32:18.724 SGL Metadata Pointer: Not Supported 00:32:18.724 Oversized SGL: Not Supported 00:32:18.724 SGL Metadata Address: Not Supported 00:32:18.724 SGL Offset: Supported 00:32:18.724 Transport SGL Data Block: Not Supported 00:32:18.724 Replay Protected Memory Block: Not Supported 00:32:18.724 00:32:18.724 Firmware Slot Information 00:32:18.724 ========================= 00:32:18.724 Active slot: 0 00:32:18.724 00:32:18.724 Asymmetric Namespace Access 00:32:18.724 =========================== 00:32:18.724 Change Count : 0 00:32:18.724 Number of ANA Group Descriptors : 1 00:32:18.724 ANA Group Descriptor : 0 00:32:18.724 ANA Group ID : 1 00:32:18.724 Number of NSID Values : 1 00:32:18.724 Change Count : 0 00:32:18.724 ANA State : 1 00:32:18.724 Namespace Identifier : 1 00:32:18.724 00:32:18.724 Commands Supported and Effects 00:32:18.724 ============================== 00:32:18.724 Admin Commands 00:32:18.724 -------------- 00:32:18.724 Get Log Page (02h): Supported 00:32:18.724 Identify (06h): Supported 00:32:18.724 Abort (08h): Supported 00:32:18.724 Set Features (09h): Supported 00:32:18.724 Get Features (0Ah): Supported 00:32:18.724 Asynchronous Event Request (0Ch): Supported 00:32:18.724 Keep Alive (18h): Supported 00:32:18.724 I/O Commands 00:32:18.724 ------------ 00:32:18.724 Flush (00h): Supported 00:32:18.724 Write (01h): Supported LBA-Change 00:32:18.724 Read (02h): Supported 00:32:18.724 Write Zeroes (08h): Supported LBA-Change 00:32:18.724 Dataset Management (09h): Supported 00:32:18.724 00:32:18.724 Error Log 00:32:18.724 ========= 
00:32:18.724 Entry: 0 00:32:18.724 Error Count: 0x3 00:32:18.724 Submission Queue Id: 0x0 00:32:18.724 Command Id: 0x5 00:32:18.724 Phase Bit: 0 00:32:18.724 Status Code: 0x2 00:32:18.724 Status Code Type: 0x0 00:32:18.724 Do Not Retry: 1 00:32:18.724 Error Location: 0x28 00:32:18.724 LBA: 0x0 00:32:18.724 Namespace: 0x0 00:32:18.724 Vendor Log Page: 0x0 00:32:18.724 ----------- 00:32:18.724 Entry: 1 00:32:18.724 Error Count: 0x2 00:32:18.724 Submission Queue Id: 0x0 00:32:18.724 Command Id: 0x5 00:32:18.724 Phase Bit: 0 00:32:18.724 Status Code: 0x2 00:32:18.724 Status Code Type: 0x0 00:32:18.724 Do Not Retry: 1 00:32:18.724 Error Location: 0x28 00:32:18.724 LBA: 0x0 00:32:18.724 Namespace: 0x0 00:32:18.724 Vendor Log Page: 0x0 00:32:18.724 ----------- 00:32:18.724 Entry: 2 00:32:18.724 Error Count: 0x1 00:32:18.724 Submission Queue Id: 0x0 00:32:18.724 Command Id: 0x4 00:32:18.724 Phase Bit: 0 00:32:18.724 Status Code: 0x2 00:32:18.724 Status Code Type: 0x0 00:32:18.724 Do Not Retry: 1 00:32:18.724 Error Location: 0x28 00:32:18.724 LBA: 0x0 00:32:18.724 Namespace: 0x0 00:32:18.724 Vendor Log Page: 0x0 00:32:18.724 00:32:18.724 Number of Queues 00:32:18.724 ================ 00:32:18.724 Number of I/O Submission Queues: 128 00:32:18.724 Number of I/O Completion Queues: 128 00:32:18.724 00:32:18.724 ZNS Specific Controller Data 00:32:18.724 ============================ 00:32:18.724 Zone Append Size Limit: 0 00:32:18.724 00:32:18.724 00:32:18.724 Active Namespaces 00:32:18.724 ================= 00:32:18.724 get_feature(0x05) failed 00:32:18.724 Namespace ID:1 00:32:18.724 Command Set Identifier: NVM (00h) 00:32:18.724 Deallocate: Supported 00:32:18.724 Deallocated/Unwritten Error: Not Supported 00:32:18.724 Deallocated Read Value: Unknown 00:32:18.724 Deallocate in Write Zeroes: Not Supported 00:32:18.724 Deallocated Guard Field: 0xFFFF 00:32:18.724 Flush: Supported 00:32:18.724 Reservation: Not Supported 00:32:18.724 Namespace Sharing Capabilities: Multiple Controllers 00:32:18.724 Size (in LBAs): 1953525168 (931GiB) 00:32:18.724 Capacity (in LBAs): 1953525168 (931GiB) 00:32:18.724 Utilization (in LBAs): 1953525168 (931GiB) 00:32:18.724 UUID: ba6d90fe-3ddb-43bf-8756-adc8f077c40b 00:32:18.724 Thin Provisioning: Not Supported 00:32:18.724 Per-NS Atomic Units: Yes 00:32:18.724 Atomic Boundary Size (Normal): 0 00:32:18.724 Atomic Boundary Size (PFail): 0 00:32:18.724 Atomic Boundary Offset: 0 00:32:18.724 NGUID/EUI64 Never Reused: No 00:32:18.724 ANA group ID: 1 00:32:18.724 Namespace Write Protected: No 00:32:18.724 Number of LBA Formats: 1 00:32:18.724 Current LBA Format: LBA Format #00 00:32:18.724 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:18.724 00:32:18.724 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:18.724 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:18.724 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:18.724 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:18.724 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:18.724 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:18.724 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:18.724 rmmod nvme_tcp 00:32:18.724 rmmod nvme_fabrics 00:32:18.724 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:18.724 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:18.724 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:18.724 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:18.724 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:18.724 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:18.724 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:18.724 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:18.724 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:18.724 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.724 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:18.724 04:48:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.256 04:48:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:21.256 04:48:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:21.256 04:48:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:21.256 04:48:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:21.256 04:48:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:21.256 04:48:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:21.256 04:48:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:21.256 04:48:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:21.256 04:48:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:21.256 04:48:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:21.256 04:48:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:22.192 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:22.192 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:22.192 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:22.192 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:22.192 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:22.192 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:22.192 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:22.192 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:22.192 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:22.192 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:22.192 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:22.192 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:22.192 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:22.192 0000:80:04.2 (8086 0e22): ioatdma 
-> vfio-pci 00:32:22.192 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:22.192 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:23.131 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:23.131 00:32:23.131 real 0m9.182s 00:32:23.131 user 0m1.944s 00:32:23.131 sys 0m3.195s 00:32:23.132 04:48:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:23.132 04:48:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:23.132 ************************************ 00:32:23.132 END TEST nvmf_identify_kernel_target 00:32:23.132 ************************************ 00:32:23.132 04:48:43 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:23.132 04:48:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:23.132 04:48:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:23.132 04:48:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.132 ************************************ 00:32:23.132 START TEST nvmf_auth_host 00:32:23.132 ************************************ 00:32:23.132 04:48:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:23.398 * Looking for test storage... 00:32:23.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
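Before nvmf_auth_host starts here, the previous test's EXIT trap has already torn the kernel target back down (clean_kernel_target, visible just above): the port-to-subsystem symlink is removed first, then the namespace, port, and subsystem configfs directories, and only once configfs is empty can the nvmet modules be unloaded. A minimal sketch of that ordering for the NQN used in this run (error handling omitted; paths as they appear in the trace):

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"  # unlink port from subsystem first
    rmdir "$subsys/namespaces/1"                                   # then the namespace...
    rmdir "$nvmet/ports/1"                                         # ...the port...
    rmdir "$subsys"                                                # ...and the subsystem itself
    modprobe -r nvmet_tcp nvmet                                    # safe only once configfs is empty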
00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:23.398 04:48:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@298 -- # local -ga mlx 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:25.299 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:25.299 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 
]] 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:25.299 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:25.299 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:25.299 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:25.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:25.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:32:25.300 00:32:25.300 --- 10.0.0.2 ping statistics --- 00:32:25.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.300 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:25.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:25.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:32:25.300 00:32:25.300 --- 10.0.0.1 ping statistics --- 00:32:25.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.300 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2929860 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # 
waitforlisten 2929860 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 2929860 ']' 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:25.300 04:48:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=da04ec327b37ccee6451f738daad79ab 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.oP9 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key da04ec327b37ccee6451f738daad79ab 0 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 da04ec327b37ccee6451f738daad79ab 0 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=da04ec327b37ccee6451f738daad79ab 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:25.558 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.oP9 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.oP9 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.oP9 
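The fixture assembled over the preceding entries is easier to read condensed. Everything below is lifted from the trace (the nvmf_tgt path shortened): the target-side port cvl_0_0 is isolated in the cvl_0_0_ns_spdk namespace on 10.0.0.2, the initiator keeps cvl_0_1 on 10.0.0.1, and TCP port 4420 is opened before the target app is started inside the namespace.

# Condensed recap of the nvmf_tcp_init / nvmfappstart steps traced above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &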
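For orientation, here is a hedged sketch of what the gen_dhchap_key helper traced above appears to do. The xxd, mktemp and chmod steps mirror the xtrace; the body piped through "python -" is not echoed, so the DHHC-1 framing below (base64 of the ASCII hex key plus a little-endian CRC32, with the digest encoded as a two-digit id) is an assumption inferred from the secrets that show up later in this log, not the verbatim helper.

# Hedged approximation of gen_dhchap_key <digest> <len>.
gen_dhchap_key_sketch() {
    local digest=$1 len=$2            # e.g. "null" 32, "sha256" 32, "sha512" 64
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex characters of randomness
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    # Assumed layout: DHHC-1:<digest id>:<base64(key + crc32(key))>:
    python3 - "$key" "$digest" > "$file" <<'PY'
import base64, struct, sys, zlib
key, digest = sys.argv[1], sys.argv[2]
hash_id = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}[digest]
blob = key.encode() + struct.pack("<I", zlib.crc32(key.encode()))
print(f"DHHC-1:{hash_id:02d}:{base64.b64encode(blob).decode()}:")
PY
    chmod 0600 "$file"
    echo "$file"
}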
00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4c5ec4f0dd282e7d8d76f8a8ac4184edfbc99c346f8f528d453fe7cf04a43776 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.BZA 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4c5ec4f0dd282e7d8d76f8a8ac4184edfbc99c346f8f528d453fe7cf04a43776 3 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4c5ec4f0dd282e7d8d76f8a8ac4184edfbc99c346f8f528d453fe7cf04a43776 3 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4c5ec4f0dd282e7d8d76f8a8ac4184edfbc99c346f8f528d453fe7cf04a43776 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.BZA 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.BZA 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.BZA 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e591a61dc66a088e782d9c964dbab26bb8fb5e72c4038006 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Mti 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e591a61dc66a088e782d9c964dbab26bb8fb5e72c4038006 0 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e591a61dc66a088e782d9c964dbab26bb8fb5e72c4038006 0 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix 
key digest 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e591a61dc66a088e782d9c964dbab26bb8fb5e72c4038006 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Mti 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Mti 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Mti 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=915d5b6c213b3d3afb93ea92ccc08ea7c0ede960408b17ff 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.JMo 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 915d5b6c213b3d3afb93ea92ccc08ea7c0ede960408b17ff 2 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 915d5b6c213b3d3afb93ea92ccc08ea7c0ede960408b17ff 2 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=915d5b6c213b3d3afb93ea92ccc08ea7c0ede960408b17ff 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.JMo 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.JMo 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.JMo 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=8e684e860ff9127229b51c2afa68a967 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.h3p 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8e684e860ff9127229b51c2afa68a967 1 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8e684e860ff9127229b51c2afa68a967 1 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8e684e860ff9127229b51c2afa68a967 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.h3p 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.h3p 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.h3p 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3909374db231ddf530f4169d1324c985 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:25.817 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.noP 00:32:25.818 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3909374db231ddf530f4169d1324c985 1 00:32:25.818 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3909374db231ddf530f4169d1324c985 1 00:32:25.818 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:25.818 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:25.818 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3909374db231ddf530f4169d1324c985 00:32:25.818 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:25.818 04:48:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.noP 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.noP 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.noP 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:26.076 04:48:46 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=af2d71b1e3fd22ccb9bdb637a6caac72ee64f0864cef076f 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.OyA 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key af2d71b1e3fd22ccb9bdb637a6caac72ee64f0864cef076f 2 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 af2d71b1e3fd22ccb9bdb637a6caac72ee64f0864cef076f 2 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=af2d71b1e3fd22ccb9bdb637a6caac72ee64f0864cef076f 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.OyA 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.OyA 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.OyA 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eb23e9f2d8dcc611acea03eb33a62596 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:26.076 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.U2K 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eb23e9f2d8dcc611acea03eb33a62596 0 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eb23e9f2d8dcc611acea03eb33a62596 0 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eb23e9f2d8dcc611acea03eb33a62596 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:26.077 04:48:46 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.U2K 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.U2K 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.U2K 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=63c32ba401165b04f86fc9c3946397f3bb5d7f6ac990243ea0f156073646d3d3 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.wCT 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 63c32ba401165b04f86fc9c3946397f3bb5d7f6ac990243ea0f156073646d3d3 3 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 63c32ba401165b04f86fc9c3946397f3bb5d7f6ac990243ea0f156073646d3d3 3 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=63c32ba401165b04f86fc9c3946397f3bb5d7f6ac990243ea0f156073646d3d3 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.wCT 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.wCT 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.wCT 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2929860 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 2929860 ']' 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:26.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
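Pulling the generation phase together, these are the secrets the trace above produced; keys[] feed --dhchap-key and ckeys[] feed --dhchap-ctrlr-key in the attach calls further down, and keyid 4 deliberately has no controller secret.

keys[0]=/tmp/spdk.key-null.oP9     ckeys[0]=/tmp/spdk.key-sha512.BZA
keys[1]=/tmp/spdk.key-null.Mti     ckeys[1]=/tmp/spdk.key-sha384.JMo
keys[2]=/tmp/spdk.key-sha256.h3p   ckeys[2]=/tmp/spdk.key-sha256.noP
keys[3]=/tmp/spdk.key-sha384.OyA   ckeys[3]=/tmp/spdk.key-null.U2K
keys[4]=/tmp/spdk.key-sha512.wCT   ckeys[4]=        # no controller secret for keyid 4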
00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:26.077 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oP9 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.BZA ]] 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.BZA 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Mti 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.JMo ]] 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JMo 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.h3p 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.noP ]] 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.noP 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
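The loop driving those keyring_file_add_key calls (keyid 3, its ckey, and keyid 4 are registered in the entries that continue below) has roughly this shape; a hedged sketch, with scripts/rpc.py standing in for whatever rpc_cmd expands to against /var/tmp/spdk.sock in this run.

for i in "${!keys[@]}"; do
    scripts/rpc.py keyring_file_add_key "key${i}" "${keys[i]}"
    # controller (bidirectional) secrets are optional; keyid 4 has none
    if [[ -n ${ckeys[i]} ]]; then
        scripts/rpc.py keyring_file_add_key "ckey${i}" "${ckeys[i]}"
    fi
done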
00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.OyA 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.U2K ]] 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.U2K 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.wCT 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
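The configure_kernel_target call the trace is entering here, after the setup.sh reset and block-device scan that follow, ends in the usual nvmet configfs sequence. A hedged sketch: the mkdir, echo and ln -s commands are the ones traced in this log, but xtrace does not show where the echoes are redirected, so the attribute file names below are assumptions based on the standard nvmet configfs layout.

modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo 1            > "$subsys/attr_allow_any_host"           # assumed target of the bare "echo 1"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"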
00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:26.336 04:48:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:27.718 Waiting for block devices as requested 00:32:27.718 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:27.718 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:27.994 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:27.994 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:28.252 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:28.252 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:28.252 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:28.252 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:28.511 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:28.511 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:28.511 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:28.511 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:28.769 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:28.769 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:28.769 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:28.769 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:29.027 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:29.285 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:29.285 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:29.285 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:29.285 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:29.285 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:29.285 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:29.286 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:29.286 04:48:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:29.286 04:48:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:29.286 No valid GPT data, bailing 00:32:29.286 04:48:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:29.286 04:48:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:29.286 04:48:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:29.286 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:29.286 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:29.286 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:29.286 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:29.544 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:29.544 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:29.544 04:48:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:32:29.544 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:29.544 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:29.544 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:29.544 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:29.544 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:29.544 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:29.544 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:29.544 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:29.544 00:32:29.544 Discovery Log Number of Records 2, Generation counter 2 00:32:29.544 =====Discovery Log Entry 0====== 00:32:29.544 trtype: tcp 00:32:29.544 adrfam: ipv4 00:32:29.544 subtype: current discovery subsystem 00:32:29.544 treq: not specified, sq flow control disable supported 00:32:29.544 portid: 1 00:32:29.544 trsvcid: 4420 00:32:29.544 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:29.544 traddr: 10.0.0.1 00:32:29.544 eflags: none 00:32:29.544 sectype: none 00:32:29.544 =====Discovery Log Entry 1====== 00:32:29.544 trtype: tcp 00:32:29.544 adrfam: ipv4 00:32:29.544 subtype: nvme subsystem 00:32:29.544 treq: not specified, sq flow control disable supported 00:32:29.544 portid: 1 00:32:29.544 trsvcid: 4420 00:32:29.544 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:29.544 traddr: 10.0.0.1 00:32:29.544 eflags: none 00:32:29.544 sectype: none 00:32:29.544 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:29.544 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:29.544 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:29.544 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:29.544 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.544 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:29.544 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:29.544 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:29.544 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:29.544 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 
]] 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.545 nvme0n1 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.545 04:48:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.545 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: ]] 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.802 
04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.802 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.803 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.803 04:48:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.803 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:29.803 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.803 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.803 nvme0n1 00:32:29.803 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.803 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.803 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.803 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.803 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.803 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.803 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.803 04:48:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.803 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.803 04:48:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.060 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.060 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.060 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:30.060 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.060 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:30.060 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:30.060 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:30.060 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:30.060 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:30.060 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:30.060 04:48:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:30.060 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:30.060 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: ]] 00:32:30.060 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:30.060 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.061 nvme0n1 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
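Each connect_authenticate pass in this stretch of the log, like the sha256 / ffdhe2048, keyid 1 pass in progress around this point, reduces to four RPCs against the target started earlier; a hedged recap, with scripts/rpc.py again standing in for rpc_cmd.

scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0 only if DH-HMAC-CHAP succeeded
scripts/rpc.py bdev_nvme_detach_controller nvme0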
00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: ]] 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.061 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.319 nvme0n1 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: ]] 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:30.319 04:48:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.319 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.576 nvme0n1 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.576 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.577 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.577 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.577 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.577 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.577 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.577 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:30.577 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.577 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.834 nvme0n1 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: ]] 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.834 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.835 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.835 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.835 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.835 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.835 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.835 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.835 04:48:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.835 04:48:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:30.835 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.835 04:48:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.093 nvme0n1 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: ]] 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.093 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.351 nvme0n1 00:32:31.351 
04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: ]] 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:31.351 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.352 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.609 nvme0n1 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: ]] 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.609 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.867 nvme0n1 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.867 
04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.867 04:48:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.867 04:48:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.125 nvme0n1 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: ]] 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:32.125 04:48:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.125 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.383 nvme0n1 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: ]] 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.383 04:48:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.383 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.948 nvme0n1 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: ]] 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.948 04:48:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.948 04:48:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.205 nvme0n1 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: ]] 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.205 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.462 nvme0n1 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.462 04:48:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.462 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.028 nvme0n1 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:34.028 04:48:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: ]] 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.028 04:48:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.593 nvme0n1 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.593 
04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: ]] 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.593 04:48:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.593 04:48:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.158 nvme0n1 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: ]] 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.158 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.724 nvme0n1 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.724 
04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: ]] 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.724 04:48:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.289 nvme0n1 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.289 04:48:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.856 nvme0n1 00:32:36.856 04:48:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.856 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.856 04:48:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.856 04:48:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.856 04:48:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.856 04:48:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.856 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.856 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.856 04:48:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.856 04:48:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.114 04:48:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.114 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:37.114 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.114 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:37.114 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.114 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.114 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:37.114 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:37.114 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:37.114 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:37.114 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.114 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:37.114 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:37.114 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: ]] 00:32:37.114 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:37.114 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:37.114 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.114 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.114 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:37.114 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:37.114 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.115 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:37.115 04:48:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.115 04:48:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.115 04:48:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.115 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.115 04:48:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.115 04:48:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.115 04:48:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.115 04:48:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.115 04:48:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.115 04:48:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.115 04:48:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.115 04:48:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.115 04:48:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.115 04:48:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.115 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:37.115 04:48:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.115 04:48:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.049 nvme0n1 00:32:38.049 04:48:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.049 04:48:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.049 04:48:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.049 04:48:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.049 04:48:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.049 04:48:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: ]] 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.049 04:48:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.983 nvme0n1 00:32:38.983 04:48:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.983 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.983 04:48:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.983 04:48:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.983 04:48:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: ]] 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:38.983 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:38.984 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.984 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:38.984 04:48:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.984 04:48:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.984 04:48:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.984 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.984 04:48:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.984 04:48:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.984 04:48:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.984 04:48:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.984 04:48:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.984 04:48:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.984 04:48:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.984 04:48:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.984 04:48:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.984 04:48:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.984 04:48:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:38.984 04:48:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.984 04:48:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.919 nvme0n1 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.919 
04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: ]] 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.919 04:49:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.290 nvme0n1 00:32:41.290 04:49:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.290 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.290 04:49:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.290 04:49:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.290 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.290 04:49:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.290 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:41.291 
04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.291 04:49:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.223 nvme0n1 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: ]] 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.223 nvme0n1 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.223 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: ]] 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
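[Editor's note] For readability, the authentication loop traced above reduces to the following per-iteration RPC sequence for each (digest, dhgroup, keyid) combination. This is a hedged sketch reconstructed only from commands that appear verbatim in the trace; rpc_cmd is the autotest harness wrapper around SPDK's RPC client, and key1/ckey1 stand in for whichever DHHC-1 secret the current keyid selects — it is not an excerpt of the log itself.

    # Restrict the host to the digest/dhgroup pair under test (e.g. sha256 + ffdhe8192)
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # Attach to the target using the key under test; the ctrlr key is passed only
    # when a bidirectional (controller) secret exists for that keyid
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Confirm the controller authenticated and came up, then tear it down
    # before the next digest/dhgroup/keyid iteration
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
    rpc_cmd bdev_nvme_detach_controller nvme0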
00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.224 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.480 nvme0n1 00:32:42.480 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.480 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.480 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.480 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.480 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.480 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.480 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.480 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.480 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.480 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.480 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.480 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.480 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:42.480 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.480 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:42.480 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:42.480 04:49:02 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:42.480 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:42.480 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:42.480 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: ]] 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.481 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.758 nvme0n1 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: ]] 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.758 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.015 nvme0n1 00:32:43.015 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.015 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.015 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.015 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.015 04:49:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.015 04:49:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.015 nvme0n1 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.015 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: ]] 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
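In this stretch of the log the digest stays fixed at sha384 while the test walks the DH groups (ffdhe2048 above, ffdhe3072 from here, ffdhe4096 further down) and, within each group, key IDs 0 through 4. Each iteration programs the key into the target through the nvmet_auth_set_key helper and then runs connect_authenticate to prove the SPDK initiator can complete the handshake. A sketch of that nested loop as it can be read off the host/auth.sh line numbers in the trace; the helpers and the keys array are the test's own and are shown here only by name:

# Loop structure implied by host/auth.sh@101-104 in the xtrace above.
digest=sha384
for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do        # groups seen in this log
  for keyid in "${!keys[@]}"; do                        # key IDs 0..4 in this run
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program target-side key
    connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach
  done
done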
00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.272 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.528 nvme0n1 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: ]] 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
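Inside connect_authenticate the pass/fail signal is simply whether the freshly attached controller shows up under the expected name before it is torn down again for the next combination. A compact version of that check, assuming rpc_cmd is the usual test wrapper around scripts/rpc.py talking to the running host application:

# Verification step corresponding to host/auth.sh@64-65 in the trace.
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == nvme0 ]] || exit 1           # handshake or connect failed
rpc_cmd bdev_nvme_detach_controller nvme0  # clean up before the next key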
00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.528 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.785 nvme0n1 00:32:43.785 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.785 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.785 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.785 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.785 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: ]] 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.786 04:49:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.043 nvme0n1 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: ]] 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.043 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.301 nvme0n1 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.301 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.558 nvme0n1 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.558 04:49:04 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: ]] 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.558 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.559 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.559 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.559 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.559 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:44.559 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.559 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.816 nvme0n1 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:44.816 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: ]] 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.817 04:49:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.074 nvme0n1 00:32:45.074 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.074 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.074 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.074 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.074 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.074 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.074 04:49:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.074 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.074 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.074 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: ]] 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.333 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.591 nvme0n1 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:45.591 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: ]] 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:45.592 04:49:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.592 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.850 nvme0n1 00:32:45.850 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.850 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.850 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.850 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.850 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.850 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.850 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.850 04:49:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.850 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.850 04:49:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.850 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.851 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.851 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.851 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.851 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.851 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.851 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.851 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.851 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.851 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.851 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.851 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.851 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:45.851 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:45.851 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.416 nvme0n1 00:32:46.416 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.416 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.416 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.416 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.416 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.416 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.416 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.416 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.416 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.416 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.416 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: ]] 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.417 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.983 nvme0n1 00:32:46.983 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.983 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.983 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.983 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.983 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.983 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.983 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.983 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: ]] 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.984 04:49:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.550 nvme0n1 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.550 04:49:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: ]] 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.550 04:49:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.551 04:49:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.551 04:49:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.551 04:49:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.551 04:49:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.551 04:49:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:47.551 04:49:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.551 04:49:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.118 nvme0n1 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: ]] 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.118 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.684 nvme0n1 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
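The nvmf/common.sh expansion unfolding around this point (local ip, the ip_candidates map, the two -z tests, ip=NVMF_INITIATOR_IP, echo 10.0.0.1) is the get_main_ns_ip helper resolving which address gets handed to the attach call's -a flag: it maps the transport under test to the name of an environment variable and then dereferences that name. Below is a condensed sketch reconstructed from this trace, not a verbatim copy of nvmf/common.sh; TEST_TRANSPORT is assumed to be the variable expanding to "tcp" here, and NVMF_INITIATOR_IP / NVMF_FIRST_TARGET_IP are assumed to be exported by the test environment (both resolve to 10.0.0.1 in this run).

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs read the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs read the initiator IP

    # Bail out if the transport is unset or has no candidate variable.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    ip=${ip_candidates[$TEST_TRANSPORT]}   # holds the *name* of the variable to read
    [[ -z ${!ip} ]] && return 1            # indirect expansion; here ${!ip} -> 10.0.0.1
    echo "${!ip}"
}

# The result feeds straight into the attach command seen throughout this trace, e.g.:
#   rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$(get_main_ns_ip)" -s 4420 ...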
00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.684 04:49:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.251 nvme0n1 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: ]] 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
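Each nvmet_auth_set_key / connect_authenticate pair traced above is one full DH-HMAC-CHAP round trip: the target side is told which digest, DH group and DHHC-1 secrets to expect for the host, then the SPDK initiator is restricted to the same digest/dhgroup and attached, verified via bdev_nvme_get_controllers, and detached. What follows is a minimal sketch of the round starting here (sha384, ffdhe8192, keyid 0), assuming the stock nvmet configfs layout under /sys/kernel/config/nvmet and scripts/rpc.py as the standalone equivalent of the test's rpc_cmd wrapper. The configfs destinations are an assumption about where the echo'd digest, dhgroup and secret lines land (auth.sh's redirections are not shown in this trace), while the NQNs, address, DHHC-1 values and RPC flags are taken from the log; key0/ckey0 name keyring entries registered earlier in auth.sh, outside this excerpt.

# Target side: install the host's DH-HMAC-CHAP material for keyid 0.
HOSTNQN=nqn.2024-02.io.spdk:host0
echo 'hmac(sha384)' > /sys/kernel/config/nvmet/hosts/"$HOSTNQN"/dhchap_hash
echo 'ffdhe8192'    > /sys/kernel/config/nvmet/hosts/"$HOSTNQN"/dhchap_dhgroup
echo 'DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc:' \
    > /sys/kernel/config/nvmet/hosts/"$HOSTNQN"/dhchap_key
echo 'DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=:' \
    > /sys/kernel/config/nvmet/hosts/"$HOSTNQN"/dhchap_ctrl_key   # bidirectional auth: controller key too

# Initiator side: allow only the digest/dhgroup under test, attach, verify, detach.
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0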
00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.251 04:49:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.509 04:49:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.509 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.509 04:49:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.509 04:49:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.509 04:49:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.509 04:49:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.509 04:49:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.509 04:49:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.509 04:49:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.509 04:49:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.509 04:49:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.509 04:49:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.509 04:49:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:49.509 04:49:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.509 04:49:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.444 nvme0n1 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: ]] 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.444 04:49:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.378 nvme0n1 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: ]] 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.378 04:49:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.312 nvme0n1 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: ]] 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.312 04:49:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.699 nvme0n1 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.699 04:49:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.699 04:49:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.632 nvme0n1 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: ]] 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.632 nvme0n1 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.632 04:49:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: ]] 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.632 04:49:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.633 04:49:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.633 04:49:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.633 04:49:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.633 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:54.633 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.633 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.891 nvme0n1 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: ]] 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.891 04:49:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.891 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.891 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.891 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.891 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.891 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.891 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.891 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.891 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.891 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.891 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.891 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.891 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.891 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:54.891 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.891 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.149 nvme0n1 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.149 04:49:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: ]] 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.149 04:49:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.149 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.407 nvme0n1 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:55.407 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.408 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.667 nvme0n1 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: ]] 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.667 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.928 nvme0n1 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.928 
04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: ]] 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.928 04:49:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.928 04:49:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.187 nvme0n1 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: ]] 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.187 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.447 nvme0n1 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.447 04:49:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: ]] 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.447 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.708 nvme0n1 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:56.708 
04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.708 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.968 nvme0n1 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: ]] 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.968 04:49:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.968 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.968 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.968 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.968 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.968 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.968 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.968 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.968 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.968 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.968 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.968 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.968 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.968 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:56.968 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.968 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.229 nvme0n1 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: ]] 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.229 04:49:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.229 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.514 nvme0n1 00:32:57.514 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.514 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.514 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.514 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.514 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.514 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: ]] 00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.772 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:57.773 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:57.773 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.773 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:57.773 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.773 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.773 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.773 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.773 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.773 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.773 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.773 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.773 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.773 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.773 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.773 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.773 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.773 04:49:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.773 04:49:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:57.773 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.773 04:49:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.033 nvme0n1 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: ]] 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.033 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.293 nvme0n1 00:32:58.293 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.293 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.293 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.293 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.293 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.293 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.293 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.293 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.293 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.293 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.293 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.293 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.293 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:58.293 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.293 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.293 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:58.293 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:58.293 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:58.293 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:58.293 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.293 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.294 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.862 nvme0n1 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: ]] 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.862 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.863 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.863 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.863 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.863 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.863 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.863 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.863 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.863 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
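The host/auth.sh@101-@104 trace lines show the shape of the sweep that produces this output: an outer loop over the configured DH groups and an inner loop over every key index, with nvmet_auth_set_key installing the credential on the target side before connect_authenticate drives the host-side attach. Restated compactly with the variable names visible in the trace (the digest is sha512 throughout this sweep; the keys/ckeys arrays are defined earlier in host/auth.sh):

    for dhgroup in "${dhgroups[@]}"; do                        # e.g. ffdhe4096, ffdhe6144, ffdhe8192
        for keyid in "${!keys[@]}"; do                         # key indices 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"       # program the target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"     # attach, verify, detach on the host
        done
    done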
00:32:58.863 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.863 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.863 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.863 04:49:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.863 04:49:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:58.863 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.863 04:49:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.429 nvme0n1 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: ]] 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
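The recurring host/auth.sh@58 line, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), is what makes bidirectional authentication optional per key: bash's ${var:+word} expansion produces the --dhchap-ctrlr-key argument only when a controller secret exists for that index and expands to nothing otherwise, which is why the attach for keyid 4 (whose ckey is empty) carries only --dhchap-key key4. A condensed illustration of the idiom with placeholder secrets:

    # Index 1 has a controller secret, index 4 deliberately does not (placeholder values).
    ckeys=([1]="DHHC-1:02:placeholder-controller-secret:" [4]="")
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid ->" --dhchap-key "key${keyid}" "${ckey[@]}"
    done
    # keyid=1 -> --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # keyid=4 -> --dhchap-key key4   (no controller key, so authentication is unidirectional)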
00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.429 04:49:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.996 nvme0n1 00:32:59.996 04:49:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.996 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.996 04:49:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.996 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.996 04:49:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.996 04:49:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.996 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.996 04:49:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.996 04:49:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.996 04:49:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: ]] 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.996 04:49:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.565 nvme0n1 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: ]] 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.565 04:49:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.134 nvme0n1 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.134 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.705 nvme0n1 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.705 04:49:21 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwNGVjMzI3YjM3Y2NlZTY0NTFmNzM4ZGFhZDc5YWIuu9Jc: 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: ]] 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM1ZWM0ZjBkZDI4MmU3ZDhkNzZmOGE4YWM0MTg0ZWRmYmM5OWMzNDZmOGY1MjhkNDUzZmU3Y2YwNGE0Mzc3NkADaYc=: 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.705 04:49:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.646 nvme0n1 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: ]] 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.646 04:49:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.647 04:49:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:02.647 04:49:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.647 04:49:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.582 nvme0n1 00:33:03.582 04:49:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.582 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.582 04:49:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.582 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.582 04:49:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.582 04:49:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.839 04:49:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU2ODRlODYwZmY5MTI3MjI5YjUxYzJhZmE2OGE5NjcZzKto: 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: ]] 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzkwOTM3NGRiMjMxZGRmNTMwZjQxNjlkMTMyNGM5ODWX4JAL: 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.839 04:49:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.777 nvme0n1 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWYyZDcxYjFlM2ZkMjJjY2I5YmRiNjM3YTZjYWFjNzJlZTY0ZjA4NjRjZWYwNzZmJ7JS/A==: 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: ]] 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWIyM2U5ZjJkOGRjYzYxMWFjZWEwM2ViMzNhNjI1OTaCsrpG: 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:04.777 04:49:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.777 04:49:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.718 nvme0n1 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjNjMzJiYTQwMTE2NWIwNGY4NmZjOWMzOTQ2Mzk3ZjNiYjVkN2Y2YWM5OTAyNDNlYTBmMTU2MDczNjQ2ZDNkM8bUfoY=: 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:05.718 04:49:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.658 nvme0n1 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTU5MWE2MWRjNjZhMDg4ZTc4MmQ5Yzk2NGRiYWIyNmJiOGZiNWU3MmM0MDM4MDA2noGCXg==: 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: ]] 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTE1ZDViNmMyMTNiM2QzYWZiOTNlYTkyY2NjMDhlYTdjMGVkZTk2MDQwOGIxN2ZmzcHF+g==: 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.658 
04:49:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.658 request: 00:33:06.658 { 00:33:06.658 "name": "nvme0", 00:33:06.658 "trtype": "tcp", 00:33:06.658 "traddr": "10.0.0.1", 00:33:06.658 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:06.658 "adrfam": "ipv4", 00:33:06.658 "trsvcid": "4420", 00:33:06.658 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:06.658 "method": "bdev_nvme_attach_controller", 00:33:06.658 "req_id": 1 00:33:06.658 } 00:33:06.658 Got JSON-RPC error response 00:33:06.658 response: 00:33:06.658 { 00:33:06.658 "code": -5, 00:33:06.658 "message": "Input/output error" 00:33:06.658 } 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:06.658 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:06.918 
04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.918 request: 00:33:06.918 { 00:33:06.918 "name": "nvme0", 00:33:06.918 "trtype": "tcp", 00:33:06.918 "traddr": "10.0.0.1", 00:33:06.918 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:06.918 "adrfam": "ipv4", 00:33:06.918 "trsvcid": "4420", 00:33:06.918 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:06.918 "dhchap_key": "key2", 00:33:06.918 "method": "bdev_nvme_attach_controller", 00:33:06.918 "req_id": 1 00:33:06.918 } 00:33:06.918 Got JSON-RPC error response 00:33:06.918 response: 00:33:06.918 { 00:33:06.918 "code": -5, 00:33:06.918 "message": "Input/output error" 00:33:06.918 } 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:06.918 
04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.918 04:49:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.918 request: 00:33:06.918 { 00:33:06.918 "name": "nvme0", 00:33:06.918 "trtype": "tcp", 00:33:06.918 "traddr": "10.0.0.1", 00:33:06.918 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:06.918 "adrfam": "ipv4", 00:33:06.918 "trsvcid": "4420", 00:33:06.918 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:06.918 "dhchap_key": "key1", 00:33:06.918 "dhchap_ctrlr_key": "ckey2", 00:33:06.918 "method": "bdev_nvme_attach_controller", 00:33:06.918 "req_id": 1 
00:33:06.918 } 00:33:06.918 Got JSON-RPC error response 00:33:06.918 response: 00:33:06.918 { 00:33:06.918 "code": -5, 00:33:06.918 "message": "Input/output error" 00:33:06.918 } 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:06.918 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:06.918 rmmod nvme_tcp 00:33:07.178 rmmod nvme_fabrics 00:33:07.178 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:07.178 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:07.178 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:07.178 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2929860 ']' 00:33:07.178 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2929860 00:33:07.178 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 2929860 ']' 00:33:07.178 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 2929860 00:33:07.178 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:33:07.178 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:07.178 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2929860 00:33:07.178 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:07.178 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:07.179 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2929860' 00:33:07.179 killing process with pid 2929860 00:33:07.179 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 2929860 00:33:07.179 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 2929860 00:33:07.439 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:07.439 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:07.439 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:07.439 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:07.439 04:49:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:07.439 04:49:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.439 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:07.439 04:49:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.347 04:49:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:09.347 04:49:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:09.347 04:49:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:09.347 04:49:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:09.347 04:49:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:09.347 04:49:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:09.347 04:49:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:09.347 04:49:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:09.347 04:49:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:09.347 04:49:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:09.347 04:49:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:09.347 04:49:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:09.347 04:49:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:10.721 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:10.721 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:10.721 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:10.721 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:10.721 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:10.721 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:10.721 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:10.721 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:10.721 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:10.721 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:10.721 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:10.721 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:10.721 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:10.721 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:10.721 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:10.721 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:11.660 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:11.660 04:49:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.oP9 /tmp/spdk.key-null.Mti /tmp/spdk.key-sha256.h3p /tmp/spdk.key-sha384.OyA /tmp/spdk.key-sha512.wCT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:11.660 04:49:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:13.036 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:13.036 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:13.036 0000:00:04.6 (8086 0e26): Already using the 
vfio-pci driver 00:33:13.036 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:13.036 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:13.036 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:13.036 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:13.036 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:13.036 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:13.036 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:13.036 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:13.036 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:13.036 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:13.036 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:13.036 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:13.036 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:13.036 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:13.036 00:33:13.036 real 0m49.739s 00:33:13.036 user 0m47.607s 00:33:13.036 sys 0m5.779s 00:33:13.036 04:49:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:13.036 04:49:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.036 ************************************ 00:33:13.036 END TEST nvmf_auth_host 00:33:13.036 ************************************ 00:33:13.036 04:49:33 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:13.036 04:49:33 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:13.036 04:49:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:13.036 04:49:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:13.036 04:49:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:13.036 ************************************ 00:33:13.036 START TEST nvmf_digest 00:33:13.036 ************************************ 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:13.036 * Looking for test storage... 
00:33:13.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:13.036 04:49:33 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:13.036 04:49:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:14.966 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:14.966 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:14.966 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:14.966 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:14.966 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:15.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:15.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:33:15.225 00:33:15.225 --- 10.0.0.2 ping statistics --- 00:33:15.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:15.225 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:15.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:15.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:33:15.225 00:33:15.225 --- 10.0.0.1 ping statistics --- 00:33:15.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:15.225 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:15.225 ************************************ 00:33:15.225 START TEST nvmf_digest_clean 00:33:15.225 ************************************ 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2939352 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2939352 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2939352 ']' 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:15.225 
04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:15.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:15.225 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:15.225 [2024-07-14 04:49:35.286615] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:15.225 [2024-07-14 04:49:35.286692] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:15.225 EAL: No free 2048 kB hugepages reported on node 1 00:33:15.225 [2024-07-14 04:49:35.349257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.484 [2024-07-14 04:49:35.432149] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:15.484 [2024-07-14 04:49:35.432217] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:15.484 [2024-07-14 04:49:35.432243] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:15.484 [2024-07-14 04:49:35.432255] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:15.484 [2024-07-14 04:49:35.432264] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
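[editor's note] The startup traced here uses SPDK's --wait-for-rpc pattern: the target is launched inside the test namespace, holds before subsystem initialization, and only proceeds once framework_start_init arrives over its RPC socket (the bdevperf instances below are released the same way via /var/tmp/bperf.sock). A minimal sketch of that handshake, assuming the repository root as working directory and the default /var/tmp/spdk.sock socket; waitforlisten is the autotest helper visible at nvmf/common.sh@482:

  # start the target inside the test namespace, held at the RPC barrier by --wait-for-rpc
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  waitforlisten "$nvmfpid"        # autotest helper: poll until the RPC socket answers
  # releasing the barrier is an explicit RPC call
  ./scripts/rpc.py framework_start_init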
00:33:15.484 [2024-07-14 04:49:35.432306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:15.484 null0 00:33:15.484 [2024-07-14 04:49:35.632162] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:15.484 [2024-07-14 04:49:35.656394] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2939372 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2939372 /var/tmp/bperf.sock 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2939372 ']' 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:15.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:15.484 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:15.743 [2024-07-14 04:49:35.703031] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:15.743 [2024-07-14 04:49:35.703104] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2939372 ] 00:33:15.743 EAL: No free 2048 kB hugepages reported on node 1 00:33:15.743 [2024-07-14 04:49:35.766315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.743 [2024-07-14 04:49:35.859786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:15.743 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:15.743 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:15.743 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:15.743 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:15.743 04:49:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:16.311 04:49:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:16.311 04:49:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:16.572 nvme0n1 00:33:16.572 04:49:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:16.572 04:49:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:16.572 Running I/O for 2 seconds... 
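[editor's note] Each run_bperf pass follows the flow visible at host/digest.sh@89, @92 and @93/@37: attach the controller to the bdevperf instance with data digest enabled, drive the timed workload, then read the accel stats to confirm which module executed the crc32c work (checked against the expected module on the lines that follow). A condensed sketch using the sockets and NQN from this log, with relative paths assumed from the repository root:

  # attach with data digest enabled (--ddgst) through the bdevperf RPC socket
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # run the timed workload defined on the bdevperf command line
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # verify crc32c digest operations were executed, and by which accel module
  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'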
00:33:18.481 00:33:18.481 Latency(us) 00:33:18.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.481 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:18.481 nvme0n1 : 2.01 19674.70 76.85 0.00 0.00 6497.15 2815.62 16990.81 00:33:18.481 =================================================================================================================== 00:33:18.481 Total : 19674.70 76.85 0.00 0.00 6497.15 2815.62 16990.81 00:33:18.481 0 00:33:18.741 04:49:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:18.741 04:49:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:18.741 04:49:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:18.741 04:49:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:18.741 | select(.opcode=="crc32c") 00:33:18.741 | "\(.module_name) \(.executed)"' 00:33:18.741 04:49:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:18.741 04:49:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:18.741 04:49:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:18.741 04:49:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:18.741 04:49:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:18.741 04:49:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2939372 00:33:18.741 04:49:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2939372 ']' 00:33:18.741 04:49:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2939372 00:33:18.741 04:49:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:19.000 04:49:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:19.000 04:49:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2939372 00:33:19.000 04:49:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:19.000 04:49:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:19.000 04:49:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2939372' 00:33:19.000 killing process with pid 2939372 00:33:19.000 04:49:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2939372 00:33:19.000 Received shutdown signal, test time was about 2.000000 seconds 00:33:19.000 00:33:19.000 Latency(us) 00:33:19.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:19.000 =================================================================================================================== 00:33:19.000 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:19.000 04:49:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2939372 00:33:19.000 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:19.000 04:49:39 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:19.000 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:19.000 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:19.000 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:19.000 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:19.000 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:19.258 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2939775 00:33:19.258 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2939775 /var/tmp/bperf.sock 00:33:19.258 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:19.258 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2939775 ']' 00:33:19.258 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:19.258 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:19.258 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:19.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:19.258 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:19.258 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:19.258 [2024-07-14 04:49:39.234372] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:19.258 [2024-07-14 04:49:39.234447] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2939775 ] 00:33:19.258 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:19.258 Zero copy mechanism will not be used. 
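After every 2-second run, the script reads the accel statistics back from bdevperf and keeps only the crc32c entry; the jq filter together with the exp_module/acc_executed checks (digest.sh@93-96, first traced after the pass above) is what decides whether a digest pass counts as successful. A condensed sketch of that verification step, with the get_accel_stats helper inlined:

# Condensed sketch of the post-run check (get_accel_stats + digest.sh@93-96).
read -r acc_module acc_executed < <(
    bperf_rpc accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
exp_module=software            # digest.sh@94: scan_dsa is false in every pass here
(( acc_executed > 0 ))         # some crc32c operations must actually have executed
[[ "$acc_module" == "$exp_module" ]]   # ...and in the expected module

The read -r splits the "module executed" pair that jq prints (e.g. "software <count>") into the two variables used by the checks.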
00:33:19.258 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.258 [2024-07-14 04:49:39.292335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.258 [2024-07-14 04:49:39.377955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:19.258 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:19.258 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:19.258 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:19.258 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:19.258 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:19.824 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:19.824 04:49:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:20.081 nvme0n1 00:33:20.081 04:49:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:20.081 04:49:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:20.340 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:20.340 Zero copy mechanism will not be used. 00:33:20.340 Running I/O for 2 seconds... 
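Every bdev_nvme_attach_controller call in this log passes --ddgst: it enables the NVMe/TCP data digest on the attached controller, so each I/O issued by bdevperf forces a CRC32C computation on the host side, which is exactly what the accel crc32c counter checked after each run is measuring. For reference, the attach call as a standalone command (address, port and subsystem NQN are exactly those used in this run):

bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
          -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0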
00:33:22.239 00:33:22.239 Latency(us) 00:33:22.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.239 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:22.239 nvme0n1 : 2.00 2379.77 297.47 0.00 0.00 6718.25 6407.96 11359.57 00:33:22.239 =================================================================================================================== 00:33:22.239 Total : 2379.77 297.47 0.00 0.00 6718.25 6407.96 11359.57 00:33:22.239 0 00:33:22.239 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:22.239 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:22.239 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:22.239 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:22.239 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:22.239 | select(.opcode=="crc32c") 00:33:22.239 | "\(.module_name) \(.executed)"' 00:33:22.499 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:22.499 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:22.499 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:22.499 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:22.499 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2939775 00:33:22.499 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2939775 ']' 00:33:22.499 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2939775 00:33:22.499 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:22.499 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:22.499 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2939775 00:33:22.499 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:22.499 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:22.499 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2939775' 00:33:22.499 killing process with pid 2939775 00:33:22.499 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2939775 00:33:22.499 Received shutdown signal, test time was about 2.000000 seconds 00:33:22.499 00:33:22.499 Latency(us) 00:33:22.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.499 =================================================================================================================== 00:33:22.499 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:22.499 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2939775 00:33:22.758 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:22.758 04:49:42 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:22.758 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:22.758 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:22.758 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:22.758 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:22.758 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:22.758 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2940183 00:33:22.758 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:22.758 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2940183 /var/tmp/bperf.sock 00:33:22.758 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2940183 ']' 00:33:22.758 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:22.758 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:22.758 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:22.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:22.758 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:22.758 04:49:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:22.758 [2024-07-14 04:49:42.907984] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:33:22.758 [2024-07-14 04:49:42.908078] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2940183 ] 00:33:22.758 EAL: No free 2048 kB hugepages reported on node 1 00:33:23.016 [2024-07-14 04:49:42.971563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:23.016 [2024-07-14 04:49:43.063465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:23.016 04:49:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:23.016 04:49:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:23.016 04:49:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:23.016 04:49:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:23.016 04:49:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:23.273 04:49:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:23.273 04:49:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:23.839 nvme0n1 00:33:23.839 04:49:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:23.839 04:49:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:23.839 Running I/O for 2 seconds... 
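Each run_bperf pass above follows the same launch pattern: record the workload parameters, start a dedicated bdevperf whose command line mirrors them, then wait for its RPC socket before configuring it. A condensed sketch assembled from the traced digest.sh@77-84 lines, using the SPDK_ROOT/BPERF_SOCK shorthand from the earlier sketch (argument handling and cleanup are simplified, and scan_dsa is false throughout this log, so its handling is omitted):

# Condensed, simplified sketch of run_bperf's setup (digest.sh@77-84).
run_bperf() {
    local rw=$1 bs=$2 qd=$3 scan_dsa=$4          # e.g. randwrite 4096 128 false

    "$SPDK_ROOT/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
        -w "$rw" -o "$bs" -t 2 -q "$qd" -z --wait-for-rpc &
    bperfpid=$!                                  # digest.sh@83

    waitforlisten "$bperfpid" "$BPERF_SOCK"      # block until the RPC socket is up
    bperf_rpc framework_start_init               # needed because of --wait-for-rpc
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
              -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    bperf_py perform_tests                       # "Running I/O for 2 seconds..."
}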
00:33:26.373 00:33:26.373 Latency(us) 00:33:26.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.373 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:26.373 nvme0n1 : 2.01 18388.50 71.83 0.00 0.00 6943.75 2997.67 9806.13 00:33:26.373 =================================================================================================================== 00:33:26.373 Total : 18388.50 71.83 0.00 0.00 6943.75 2997.67 9806.13 00:33:26.373 0 00:33:26.373 04:49:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:26.373 04:49:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:26.373 | select(.opcode=="crc32c") 00:33:26.373 | "\(.module_name) \(.executed)"' 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2940183 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2940183 ']' 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2940183 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2940183 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2940183' 00:33:26.373 killing process with pid 2940183 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2940183 00:33:26.373 Received shutdown signal, test time was about 2.000000 seconds 00:33:26.373 00:33:26.373 Latency(us) 00:33:26.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.373 =================================================================================================================== 00:33:26.373 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2940183 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:26.373 04:49:46 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2940696 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2940696 /var/tmp/bperf.sock 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2940696 ']' 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:26.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:26.373 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:26.373 [2024-07-14 04:49:46.533615] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:26.373 [2024-07-14 04:49:46.533712] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2940696 ] 00:33:26.373 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:26.373 Zero copy mechanism will not be used. 
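After every pass the bdevperf instance is torn down with killprocess <pid> from autotest_common.sh; the xtrace in the previous pass (and in every pass of this log) shows its checks one by one: the pid test, kill -0, the ps comm lookup that resolves to reactor_1, the "killing process with pid ..." message, then kill and a final wait. A hedged reconstruction of the flow those fragments imply; this is not the verbatim autotest_common.sh source:

# Hedged reconstruction of killprocess as traced (autotest_common.sh@946-970).
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                      # @946: a pid must be supplied
    kill -0 "$pid" || return 1                     # @950: the process must still exist
    if [ "$(uname)" = Linux ]; then                # @951
        process_name=$(ps --no-headers -o comm= "$pid")   # @952: reactor_1 in this log
    fi
    # @956 checks whether that name is "sudo" (a wrapper); the branch is never taken here.
    echo "killing process with pid $pid"           # @964
    kill "$pid"                                    # @965
    wait "$pid"                                    # @970: reap it and collect its exit status
}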
00:33:26.373 EAL: No free 2048 kB hugepages reported on node 1 00:33:26.631 [2024-07-14 04:49:46.593255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.631 [2024-07-14 04:49:46.681010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:26.631 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:26.631 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:26.631 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:26.631 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:26.631 04:49:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:27.197 04:49:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:27.197 04:49:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:27.454 nvme0n1 00:33:27.454 04:49:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:27.454 04:49:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:27.454 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:27.454 Zero copy mechanism will not be used. 00:33:27.454 Running I/O for 2 seconds... 
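Between starting bdevperf and sending it the first RPC, every pass calls waitforlisten <pid> /var/tmp/bperf.sock, which prints the "Waiting for process to start up and listen on UNIX domain socket ..." banner seen throughout this log. The xtrace exposes only fragments of it (the locals at autotest_common.sh@831-832, the echo at @834, and the (( i == 0 )) / return 0 exit path at @856-860), so the loop below is purely illustrative, written to have the same observable behaviour; it is not the actual autotest_common.sh implementation:

# Illustrative only -- the real waitforlisten is more thorough than this.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}        # @831: the digest tests pass /var/tmp/bperf.sock
    local max_retries=100                          # @832
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    local i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1     # give up if the process already died
        [ -S "$rpc_addr" ] && return 0             # the listening socket exists: ready
        sleep 0.1
    done
    return 1                                       # never came up within max_retries
}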
00:33:29.982 00:33:29.982 Latency(us) 00:33:29.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:29.982 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:29.982 nvme0n1 : 2.01 1593.42 199.18 0.00 0.00 10014.18 3094.76 12718.84 00:33:29.982 =================================================================================================================== 00:33:29.982 Total : 1593.42 199.18 0.00 0.00 10014.18 3094.76 12718.84 00:33:29.982 0 00:33:29.982 04:49:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:29.982 04:49:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:29.982 04:49:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:29.982 04:49:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:29.982 | select(.opcode=="crc32c") 00:33:29.982 | "\(.module_name) \(.executed)"' 00:33:29.982 04:49:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:29.983 04:49:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:29.983 04:49:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:29.983 04:49:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:29.983 04:49:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:29.983 04:49:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2940696 00:33:29.983 04:49:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2940696 ']' 00:33:29.983 04:49:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2940696 00:33:29.983 04:49:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:29.983 04:49:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:29.983 04:49:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2940696 00:33:29.983 04:49:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:29.983 04:49:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:29.983 04:49:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2940696' 00:33:29.983 killing process with pid 2940696 00:33:29.983 04:49:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2940696 00:33:29.983 Received shutdown signal, test time was about 2.000000 seconds 00:33:29.983 00:33:29.983 Latency(us) 00:33:29.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:29.983 =================================================================================================================== 00:33:29.983 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:29.983 04:49:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2940696 00:33:29.983 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2939352 00:33:29.983 04:49:50 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2939352 ']' 00:33:29.983 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2939352 00:33:29.983 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:29.983 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:29.983 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2939352 00:33:29.983 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:29.983 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:29.983 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2939352' 00:33:29.983 killing process with pid 2939352 00:33:29.983 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2939352 00:33:29.983 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2939352 00:33:30.239 00:33:30.239 real 0m15.157s 00:33:30.239 user 0m30.381s 00:33:30.239 sys 0m3.884s 00:33:30.239 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:30.240 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:30.240 ************************************ 00:33:30.240 END TEST nvmf_digest_clean 00:33:30.240 ************************************ 00:33:30.240 04:49:50 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:30.240 04:49:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:30.240 04:49:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:30.240 04:49:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:30.499 ************************************ 00:33:30.499 START TEST nvmf_digest_error 00:33:30.499 ************************************ 00:33:30.499 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:33:30.499 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:30.499 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:30.499 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:30.499 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:30.499 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2941147 00:33:30.499 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:30.499 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2941147 00:33:30.499 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2941147 ']' 00:33:30.499 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.499 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:33:30.499 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:30.499 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:30.499 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:30.499 [2024-07-14 04:49:50.500774] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:30.499 [2024-07-14 04:49:50.500851] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:30.499 EAL: No free 2048 kB hugepages reported on node 1 00:33:30.499 [2024-07-14 04:49:50.562366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.499 [2024-07-14 04:49:50.645823] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:30.499 [2024-07-14 04:49:50.645884] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:30.499 [2024-07-14 04:49:50.645914] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:30.499 [2024-07-14 04:49:50.645925] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:30.499 [2024-07-14 04:49:50.645935] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
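Unlike the digest_clean passes, the digest_error test first brings up its own NVMe-oF target: nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with --wait-for-rpc and waits on the default /var/tmp/spdk.sock before configuring it. Restated from the trace (namespace, flags and paths are exactly those of this run; the backgrounding with & and $! is a simplification of nvmf/common.sh@480-482):

# Target-side launch as traced above; values are specific to this CI host.
ip netns exec cvl_0_0_ns_spdk \
    "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!                       # nvmf/common.sh@481
waitforlisten "$nvmfpid"         # @482: default RPC socket /var/tmp/spdk.sock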
00:33:30.499 [2024-07-14 04:49:50.645968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:30.758 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:30.758 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:30.758 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:30.758 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:30.758 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:30.758 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:30.758 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:30.758 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.758 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:30.758 [2024-07-14 04:49:50.730558] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:30.758 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.758 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:30.758 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:30.758 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.759 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:30.759 null0 00:33:30.759 [2024-07-14 04:49:50.844967] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:30.759 [2024-07-14 04:49:50.869174] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:30.759 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.759 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:30.759 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:30.759 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:30.759 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:30.759 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:30.759 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2941167 00:33:30.759 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2941167 /var/tmp/bperf.sock 00:33:30.759 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2941167 ']' 00:33:30.759 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:30.759 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:30.759 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 
-t 2 -q 128 -z 00:33:30.759 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:30.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:30.759 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:30.759 04:49:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:30.759 [2024-07-14 04:49:50.918260] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:30.759 [2024-07-14 04:49:50.918332] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2941167 ] 00:33:30.759 EAL: No free 2048 kB hugepages reported on node 1 00:33:31.017 [2024-07-14 04:49:50.985233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.017 [2024-07-14 04:49:51.080091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:31.017 04:49:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:31.017 04:49:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:31.017 04:49:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:31.017 04:49:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:31.584 04:49:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:31.584 04:49:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.584 04:49:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:31.584 04:49:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.584 04:49:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:31.584 04:49:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:31.855 nvme0n1 00:33:31.855 04:49:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:31.855 04:49:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.855 04:49:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:31.855 04:49:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.855 04:49:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:31.855 04:49:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:31.855 Running I/O for 2 seconds... 00:33:31.856 [2024-07-14 04:49:52.030939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:31.856 [2024-07-14 04:49:52.031005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.856 [2024-07-14 04:49:52.031042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.141 [2024-07-14 04:49:52.047376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.141 [2024-07-14 04:49:52.047436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.141 [2024-07-14 04:49:52.047464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.141 [2024-07-14 04:49:52.061591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.141 [2024-07-14 04:49:52.061632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.141 [2024-07-14 04:49:52.061660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.141 [2024-07-14 04:49:52.075696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.141 [2024-07-14 04:49:52.075743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.141 [2024-07-14 04:49:52.075784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.141 [2024-07-14 04:49:52.089194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.141 [2024-07-14 04:49:52.089239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.141 [2024-07-14 04:49:52.089259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.141 [2024-07-14 04:49:52.103321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.141 [2024-07-14 04:49:52.103366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.141 [2024-07-14 04:49:52.103397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.141 [2024-07-14 04:49:52.117122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.141 [2024-07-14 04:49:52.117184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8057 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:32.141 [2024-07-14 04:49:52.117205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.141 [2024-07-14 04:49:52.132501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.141 [2024-07-14 04:49:52.132539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.141 [2024-07-14 04:49:52.132559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.141 [2024-07-14 04:49:52.145344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.141 [2024-07-14 04:49:52.145380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.141 [2024-07-14 04:49:52.145400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.141 [2024-07-14 04:49:52.160966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.141 [2024-07-14 04:49:52.161007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.141 [2024-07-14 04:49:52.161035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.141 [2024-07-14 04:49:52.174021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.141 [2024-07-14 04:49:52.174055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.141 [2024-07-14 04:49:52.174073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.141 [2024-07-14 04:49:52.189962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.141 [2024-07-14 04:49:52.189996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.141 [2024-07-14 04:49:52.190014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.141 [2024-07-14 04:49:52.202737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.141 [2024-07-14 04:49:52.202780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.141 [2024-07-14 04:49:52.202801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.141 [2024-07-14 04:49:52.219220] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.141 [2024-07-14 04:49:52.219256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 
lba:8640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.141 [2024-07-14 04:49:52.219277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.141 [2024-07-14 04:49:52.232838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.141 [2024-07-14 04:49:52.232885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.141 [2024-07-14 04:49:52.232920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.141 [2024-07-14 04:49:52.246702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.141 [2024-07-14 04:49:52.246749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.141 [2024-07-14 04:49:52.246781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.141 [2024-07-14 04:49:52.260334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.141 [2024-07-14 04:49:52.260371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.141 [2024-07-14 04:49:52.260390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.141 [2024-07-14 04:49:52.278306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.141 [2024-07-14 04:49:52.278346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.141 [2024-07-14 04:49:52.278368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.141 [2024-07-14 04:49:52.295238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.141 [2024-07-14 04:49:52.295295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.141 [2024-07-14 04:49:52.295327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.141 [2024-07-14 04:49:52.309660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.141 [2024-07-14 04:49:52.309695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.141 [2024-07-14 04:49:52.309715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.141 [2024-07-14 04:49:52.328089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.141 [2024-07-14 04:49:52.328127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.141 [2024-07-14 04:49:52.328171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.399 [2024-07-14 04:49:52.341577] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.399 [2024-07-14 04:49:52.341613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.399 [2024-07-14 04:49:52.341633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.399 [2024-07-14 04:49:52.358800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.399 [2024-07-14 04:49:52.358845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.399 [2024-07-14 04:49:52.358886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.399 [2024-07-14 04:49:52.375221] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.399 [2024-07-14 04:49:52.375265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.399 [2024-07-14 04:49:52.375296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.399 [2024-07-14 04:49:52.389313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.399 [2024-07-14 04:49:52.389349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.399 [2024-07-14 04:49:52.389369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.399 [2024-07-14 04:49:52.404213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.399 [2024-07-14 04:49:52.404267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.399 [2024-07-14 04:49:52.404310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.399 [2024-07-14 04:49:52.416559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.399 [2024-07-14 04:49:52.416596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.399 [2024-07-14 04:49:52.416616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.399 [2024-07-14 04:49:52.432091] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.399 
[2024-07-14 04:49:52.432134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.399 [2024-07-14 04:49:52.432162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.399 [2024-07-14 04:49:52.446455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.399 [2024-07-14 04:49:52.446500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.399 [2024-07-14 04:49:52.446531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.399 [2024-07-14 04:49:52.460077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.399 [2024-07-14 04:49:52.460131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.399 [2024-07-14 04:49:52.460177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.399 [2024-07-14 04:49:52.473206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.399 [2024-07-14 04:49:52.473254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.399 [2024-07-14 04:49:52.473274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.399 [2024-07-14 04:49:52.487217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.399 [2024-07-14 04:49:52.487273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.399 [2024-07-14 04:49:52.487306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.399 [2024-07-14 04:49:52.501756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.399 [2024-07-14 04:49:52.501801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.399 [2024-07-14 04:49:52.501833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.399 [2024-07-14 04:49:52.515273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.399 [2024-07-14 04:49:52.515309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.399 [2024-07-14 04:49:52.515329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.400 [2024-07-14 04:49:52.532848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6be8d0) 00:33:32.400 [2024-07-14 04:49:52.532903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.400 [2024-07-14 04:49:52.532935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.400 [2024-07-14 04:49:52.545125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.400 [2024-07-14 04:49:52.545172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.400 [2024-07-14 04:49:52.545190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.400 [2024-07-14 04:49:52.560072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.400 [2024-07-14 04:49:52.560112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.400 [2024-07-14 04:49:52.560139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.400 [2024-07-14 04:49:52.574365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.400 [2024-07-14 04:49:52.574401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.400 [2024-07-14 04:49:52.574420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.657 [2024-07-14 04:49:52.592087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.657 [2024-07-14 04:49:52.592129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.657 [2024-07-14 04:49:52.592158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.657 [2024-07-14 04:49:52.604304] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.657 [2024-07-14 04:49:52.604340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.657 [2024-07-14 04:49:52.604359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.657 [2024-07-14 04:49:52.622199] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.657 [2024-07-14 04:49:52.622243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.657 [2024-07-14 04:49:52.622273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.657 [2024-07-14 04:49:52.637184] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.657 [2024-07-14 04:49:52.637230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.657 [2024-07-14 04:49:52.637261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.657 [2024-07-14 04:49:52.651441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.657 [2024-07-14 04:49:52.651478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.657 [2024-07-14 04:49:52.651498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.657 [2024-07-14 04:49:52.668111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.657 [2024-07-14 04:49:52.668143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.657 [2024-07-14 04:49:52.668161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.657 [2024-07-14 04:49:52.682985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.657 [2024-07-14 04:49:52.683025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.657 [2024-07-14 04:49:52.683053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.657 [2024-07-14 04:49:52.695918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.657 [2024-07-14 04:49:52.695948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.657 [2024-07-14 04:49:52.695964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.657 [2024-07-14 04:49:52.713298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.657 [2024-07-14 04:49:52.713342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.657 [2024-07-14 04:49:52.713385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.657 [2024-07-14 04:49:52.727759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.657 [2024-07-14 04:49:52.727804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.657 [2024-07-14 04:49:52.727835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:32.657 [2024-07-14 04:49:52.740007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.657 [2024-07-14 04:49:52.740057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.657 [2024-07-14 04:49:52.740083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.657 [2024-07-14 04:49:52.754861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.657 [2024-07-14 04:49:52.754929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.657 [2024-07-14 04:49:52.754957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.657 [2024-07-14 04:49:52.771083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.657 [2024-07-14 04:49:52.771124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.657 [2024-07-14 04:49:52.771153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.657 [2024-07-14 04:49:52.783852] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.657 [2024-07-14 04:49:52.783895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.657 [2024-07-14 04:49:52.783916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.657 [2024-07-14 04:49:52.803351] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.657 [2024-07-14 04:49:52.803387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.657 [2024-07-14 04:49:52.803407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.657 [2024-07-14 04:49:52.818830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.657 [2024-07-14 04:49:52.818885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.657 [2024-07-14 04:49:52.818932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.657 [2024-07-14 04:49:52.832301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.657 [2024-07-14 04:49:52.832338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.657 [2024-07-14 04:49:52.832357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.657 [2024-07-14 04:49:52.846707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.657 [2024-07-14 04:49:52.846759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.657 [2024-07-14 04:49:52.846791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.917 [2024-07-14 04:49:52.860149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.917 [2024-07-14 04:49:52.860198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.917 [2024-07-14 04:49:52.860222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.917 [2024-07-14 04:49:52.875034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.917 [2024-07-14 04:49:52.875074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.917 [2024-07-14 04:49:52.875103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.917 [2024-07-14 04:49:52.890316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.917 [2024-07-14 04:49:52.890360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.917 [2024-07-14 04:49:52.890391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.917 [2024-07-14 04:49:52.903074] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.917 [2024-07-14 04:49:52.903113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.917 [2024-07-14 04:49:52.903156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.917 [2024-07-14 04:49:52.917088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.917 [2024-07-14 04:49:52.917129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.917 [2024-07-14 04:49:52.917158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.917 [2024-07-14 04:49:52.930055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.917 [2024-07-14 04:49:52.930093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.917 [2024-07-14 04:49:52.930135] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.917 [2024-07-14 04:49:52.944058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.917 [2024-07-14 04:49:52.944088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.917 [2024-07-14 04:49:52.944120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.917 [2024-07-14 04:49:52.958515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.917 [2024-07-14 04:49:52.958560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.917 [2024-07-14 04:49:52.958590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.917 [2024-07-14 04:49:52.971728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.917 [2024-07-14 04:49:52.971774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.917 [2024-07-14 04:49:52.971806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.917 [2024-07-14 04:49:52.988927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.917 [2024-07-14 04:49:52.988967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.917 [2024-07-14 04:49:52.988995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.917 [2024-07-14 04:49:53.002400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.917 [2024-07-14 04:49:53.002445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.917 [2024-07-14 04:49:53.002477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.917 [2024-07-14 04:49:53.018890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.917 [2024-07-14 04:49:53.018955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.917 [2024-07-14 04:49:53.018996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.917 [2024-07-14 04:49:53.032129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.917 [2024-07-14 04:49:53.032159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.917 [2024-07-14 04:49:53.032191] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.917 [2024-07-14 04:49:53.047971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.917 [2024-07-14 04:49:53.048002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.917 [2024-07-14 04:49:53.048019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.917 [2024-07-14 04:49:53.062621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.917 [2024-07-14 04:49:53.062667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.917 [2024-07-14 04:49:53.062698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.917 [2024-07-14 04:49:53.076815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.917 [2024-07-14 04:49:53.076852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.917 [2024-07-14 04:49:53.076879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.917 [2024-07-14 04:49:53.094077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.917 [2024-07-14 04:49:53.094107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.917 [2024-07-14 04:49:53.094144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.917 [2024-07-14 04:49:53.107764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:32.917 [2024-07-14 04:49:53.107801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.917 [2024-07-14 04:49:53.107821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.176 [2024-07-14 04:49:53.125798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.176 [2024-07-14 04:49:53.125844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.176 [2024-07-14 04:49:53.125887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.176 [2024-07-14 04:49:53.138276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.176 [2024-07-14 04:49:53.138320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:33.176 [2024-07-14 04:49:53.138351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.176 [2024-07-14 04:49:53.155788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.176 [2024-07-14 04:49:53.155833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.176 [2024-07-14 04:49:53.155873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.176 [2024-07-14 04:49:53.172118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.176 [2024-07-14 04:49:53.172174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.176 [2024-07-14 04:49:53.172215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.176 [2024-07-14 04:49:53.186360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.176 [2024-07-14 04:49:53.186397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.176 [2024-07-14 04:49:53.186417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.176 [2024-07-14 04:49:53.205751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.176 [2024-07-14 04:49:53.205798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.176 [2024-07-14 04:49:53.205830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.176 [2024-07-14 04:49:53.218108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.176 [2024-07-14 04:49:53.218154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.176 [2024-07-14 04:49:53.218172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.176 [2024-07-14 04:49:53.236472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.176 [2024-07-14 04:49:53.236525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.176 [2024-07-14 04:49:53.236556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.176 [2024-07-14 04:49:53.249499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.176 [2024-07-14 04:49:53.249537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24804 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.176 [2024-07-14 04:49:53.249556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.176 [2024-07-14 04:49:53.267517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.176 [2024-07-14 04:49:53.267555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.176 [2024-07-14 04:49:53.267575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.176 [2024-07-14 04:49:53.284743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.176 [2024-07-14 04:49:53.284789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.176 [2024-07-14 04:49:53.284820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.176 [2024-07-14 04:49:53.298736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.176 [2024-07-14 04:49:53.298774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.176 [2024-07-14 04:49:53.298794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.176 [2024-07-14 04:49:53.316677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.176 [2024-07-14 04:49:53.316723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.176 [2024-07-14 04:49:53.316754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.176 [2024-07-14 04:49:53.329713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.176 [2024-07-14 04:49:53.329750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.176 [2024-07-14 04:49:53.329770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.176 [2024-07-14 04:49:53.347279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.176 [2024-07-14 04:49:53.347325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.177 [2024-07-14 04:49:53.347358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.177 [2024-07-14 04:49:53.363257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.177 [2024-07-14 04:49:53.363302] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.177 [2024-07-14 04:49:53.363333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.436 [2024-07-14 04:49:53.377595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.436 [2024-07-14 04:49:53.377633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.436 [2024-07-14 04:49:53.377653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.436 [2024-07-14 04:49:53.392000] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.436 [2024-07-14 04:49:53.392056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.436 [2024-07-14 04:49:53.392082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.436 [2024-07-14 04:49:53.406041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.436 [2024-07-14 04:49:53.406081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.437 [2024-07-14 04:49:53.406110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.437 [2024-07-14 04:49:53.418721] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.437 [2024-07-14 04:49:53.418766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.437 [2024-07-14 04:49:53.418798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.437 [2024-07-14 04:49:53.432660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.437 [2024-07-14 04:49:53.432707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.437 [2024-07-14 04:49:53.432741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.437 [2024-07-14 04:49:53.446540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.437 [2024-07-14 04:49:53.446586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.437 [2024-07-14 04:49:53.446617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.437 [2024-07-14 04:49:53.461402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.437 [2024-07-14 04:49:53.461447] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.437 [2024-07-14 04:49:53.461478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.437 [2024-07-14 04:49:53.474797] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.437 [2024-07-14 04:49:53.474833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.437 [2024-07-14 04:49:53.474852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.437 [2024-07-14 04:49:53.490705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.437 [2024-07-14 04:49:53.490749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.437 [2024-07-14 04:49:53.490787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.437 [2024-07-14 04:49:53.504810] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.437 [2024-07-14 04:49:53.504854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.437 [2024-07-14 04:49:53.504896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.437 [2024-07-14 04:49:53.517923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.437 [2024-07-14 04:49:53.517953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.437 [2024-07-14 04:49:53.517984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.437 [2024-07-14 04:49:53.533462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.437 [2024-07-14 04:49:53.533507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.437 [2024-07-14 04:49:53.533538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.437 [2024-07-14 04:49:53.547432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.437 [2024-07-14 04:49:53.547469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.437 [2024-07-14 04:49:53.547489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.437 [2024-07-14 04:49:53.563365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 
00:33:33.437 [2024-07-14 04:49:53.563402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.437 [2024-07-14 04:49:53.563422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.437 [2024-07-14 04:49:53.577044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.437 [2024-07-14 04:49:53.577097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.437 [2024-07-14 04:49:53.577125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.437 [2024-07-14 04:49:53.590719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.437 [2024-07-14 04:49:53.590754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.437 [2024-07-14 04:49:53.590774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.437 [2024-07-14 04:49:53.603955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.437 [2024-07-14 04:49:53.603997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.437 [2024-07-14 04:49:53.604025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.437 [2024-07-14 04:49:53.618509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.437 [2024-07-14 04:49:53.618554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.437 [2024-07-14 04:49:53.618586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.699 [2024-07-14 04:49:53.631095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.699 [2024-07-14 04:49:53.631141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.699 [2024-07-14 04:49:53.631157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.699 [2024-07-14 04:49:53.644913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.699 [2024-07-14 04:49:53.644954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.699 [2024-07-14 04:49:53.644984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.699 [2024-07-14 04:49:53.659082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.699 [2024-07-14 04:49:53.659135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.699 [2024-07-14 04:49:53.659163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.699 [2024-07-14 04:49:53.676228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.699 [2024-07-14 04:49:53.676275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.699 [2024-07-14 04:49:53.676306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.699 [2024-07-14 04:49:53.687862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.699 [2024-07-14 04:49:53.687918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.699 [2024-07-14 04:49:53.687935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.699 [2024-07-14 04:49:53.705725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.699 [2024-07-14 04:49:53.705761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.699 [2024-07-14 04:49:53.705781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.699 [2024-07-14 04:49:53.723738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.699 [2024-07-14 04:49:53.723782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.699 [2024-07-14 04:49:53.723817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.699 [2024-07-14 04:49:53.737551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.699 [2024-07-14 04:49:53.737588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.699 [2024-07-14 04:49:53.737614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.699 [2024-07-14 04:49:53.756062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.699 [2024-07-14 04:49:53.756104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.699 [2024-07-14 04:49:53.756131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.699 [2024-07-14 04:49:53.768079] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.699 [2024-07-14 04:49:53.768110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.699 [2024-07-14 04:49:53.768127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.699 [2024-07-14 04:49:53.785766] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.699 [2024-07-14 04:49:53.785803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.699 [2024-07-14 04:49:53.785822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.699 [2024-07-14 04:49:53.798003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.699 [2024-07-14 04:49:53.798033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.699 [2024-07-14 04:49:53.798064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.699 [2024-07-14 04:49:53.813170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.699 [2024-07-14 04:49:53.813224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.699 [2024-07-14 04:49:53.813255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.699 [2024-07-14 04:49:53.825825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.699 [2024-07-14 04:49:53.825862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.699 [2024-07-14 04:49:53.825891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.699 [2024-07-14 04:49:53.841107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.699 [2024-07-14 04:49:53.841161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.699 [2024-07-14 04:49:53.841192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.699 [2024-07-14 04:49:53.853656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.699 [2024-07-14 04:49:53.853702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.699 [2024-07-14 04:49:53.853734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:33.700 [2024-07-14 04:49:53.868570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.700 [2024-07-14 04:49:53.868622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.700 [2024-07-14 04:49:53.868654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.700 [2024-07-14 04:49:53.881297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.700 [2024-07-14 04:49:53.881342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.700 [2024-07-14 04:49:53.881375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.959 [2024-07-14 04:49:53.895502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.959 [2024-07-14 04:49:53.895540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.959 [2024-07-14 04:49:53.895559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.959 [2024-07-14 04:49:53.909450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.959 [2024-07-14 04:49:53.909495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.959 [2024-07-14 04:49:53.909527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.959 [2024-07-14 04:49:53.923859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.959 [2024-07-14 04:49:53.923917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.959 [2024-07-14 04:49:53.923935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.959 [2024-07-14 04:49:53.936938] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.959 [2024-07-14 04:49:53.936978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.959 [2024-07-14 04:49:53.937007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.959 [2024-07-14 04:49:53.950540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0) 00:33:33.959 [2024-07-14 04:49:53.950585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.959 [2024-07-14 04:49:53.950616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:33.959 [2024-07-14 04:49:53.965382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0)
00:33:33.959 [2024-07-14 04:49:53.965426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:33.959 [2024-07-14 04:49:53.965457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:33.959 [2024-07-14 04:49:53.979778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0)
00:33:33.959 [2024-07-14 04:49:53.979824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:33.959 [2024-07-14 04:49:53.979855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:33.959 [2024-07-14 04:49:53.992987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0)
00:33:33.959 [2024-07-14 04:49:53.993019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:33.959 [2024-07-14 04:49:53.993052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:33.959 [2024-07-14 04:49:54.006468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6be8d0)
00:33:33.959 [2024-07-14 04:49:54.006505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:33.959 [2024-07-14 04:49:54.006524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:33.959
00:33:33.959                                                  Latency(us)
00:33:33.959 Device Information     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:33.959 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:33.959      nvme0n1           :       2.01   17255.86      67.41       0.00     0.00    7407.41    3422.44   21068.61
00:33:33.959 ===================================================================================================================
00:33:33.959 Total                  :              17255.86      67.41       0.00     0.00    7407.41    3422.44   21068.61
00:33:33.959 0
00:33:33.959 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:33.959 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:33.959 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:33.959 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:33.959 | .driver_specific
00:33:33.959 | .nvme_error
00:33:33.959 | .status_code
00:33:33.959 | .command_transient_transport_error'
00:33:34.217 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 135 > 0 ))
00:33:34.217 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2941167
00:33:34.217 04:49:54
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2941167 ']' 00:33:34.217 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2941167 00:33:34.217 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:34.217 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:34.218 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2941167 00:33:34.218 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:34.218 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:34.218 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2941167' 00:33:34.218 killing process with pid 2941167 00:33:34.218 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2941167 00:33:34.218 Received shutdown signal, test time was about 2.000000 seconds 00:33:34.218 00:33:34.218 Latency(us) 00:33:34.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:34.218 =================================================================================================================== 00:33:34.218 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:34.218 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2941167 00:33:34.476 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:33:34.476 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:34.476 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:34.476 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:34.476 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:34.476 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2941700 00:33:34.476 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:34.476 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2941700 /var/tmp/bperf.sock 00:33:34.476 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2941700 ']' 00:33:34.476 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:34.476 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:34.476 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:34.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
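(For reference, the get_transient_errcount step traced above at host/digest.sh@71 reduces to a single RPC plus a jq filter. A minimal sketch, assuming the bdevperf RPC socket /var/tmp/bperf.sock is live and using the bdev name, workspace path, and jq expression shown in the trace; the error counters come from the --nvme-error-stat option set earlier.)

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Read per-bdev I/O statistics over the bdevperf RPC socket and pull out the
# count of commands completed with TRANSIENT TRANSPORT ERROR status.
errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
    | .driver_specific
    | .nvme_error
    | .status_code
    | .command_transient_transport_error')
# The assertion traced above: the run only passes if at least one injected
# digest error surfaced as a transient transport error (135 in this run).
(( errcount > 0 ))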
00:33:34.476 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:34.476 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:34.476 [2024-07-14 04:49:54.560013] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:34.476 [2024-07-14 04:49:54.560095] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2941700 ] 00:33:34.476 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:34.476 Zero copy mechanism will not be used. 00:33:34.476 EAL: No free 2048 kB hugepages reported on node 1 00:33:34.476 [2024-07-14 04:49:54.618482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.734 [2024-07-14 04:49:54.704044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.734 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:34.734 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:34.734 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:34.734 04:49:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:34.992 04:49:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:34.992 04:49:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.992 04:49:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:34.992 04:49:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.992 04:49:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:34.992 04:49:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:35.250 nvme0n1 00:33:35.250 04:49:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:35.250 04:49:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.250 04:49:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:35.250 04:49:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.250 04:49:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:35.250 04:49:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:35.508 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:35.508 Zero copy mechanism will not be used. 
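(The setup just traced, condensed into a sketch. bperf_rpc and bperf_py are the test suite's wrappers around scripts/rpc.py and bdevperf.py; the socket, target address, and NQN are the ones shown in the trace. The crc32c injection is issued through the autotest rpc_cmd helper, whose target socket the trace does not show, so it is left as a comment rather than an explicit rpc.py call.)

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# 1. Start bdevperf in wait-for-RPC mode (-z): 128 KiB random reads, queue depth 16, 2 s run.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

# 2. Enable per-command NVMe error counters, retry indefinitely, and attach the target
#    with data digest (--ddgst) so every received data PDU has its CRC32C verified.
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 3. Arm the accel error module to corrupt 32 crc32c operations, then drive I/O.
#    In the trace this is: rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
#    Each corrupted digest then appears below as a 'data digest error' followed by a
#    COMMAND TRANSIENT TRANSPORT ERROR completion.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests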
00:33:35.508 Running I/O for 2 seconds...
00:33:35.508 [2024-07-14 04:49:55.531363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fd2c0)
00:33:35.508 [2024-07-14 04:49:55.531421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:35.508 [2024-07-14 04:49:55.531445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:35.508 [2024-07-14 04:49:55.549472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fd2c0)
00:33:35.508 [2024-07-14 04:49:55.549510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:35.508 [2024-07-14 04:49:55.549530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair 0x16fd2c0, READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats roughly every 16-18 ms for the rest of the 2-second randread run, from 04:49:55.567 through 04:49:57.493, with only the timestamp, lba and sqhd fields changing ...]
00:33:37.325 [2024-07-14 04:49:57.510402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fd2c0)
00:33:37.325 [2024-07-14 04:49:57.510435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.325 [2024-07-14 04:49:57.510453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.585 Latency(us)
00:33:37.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:37.585 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:37.585 nvme0n1 : 2.01 1829.02 228.63 0.00 0.00 8742.90 7621.59 18350.08
00:33:37.585 ===================================================================================================================
00:33:37.585 Total : 1829.02 228.63 0.00 0.00 8742.90 7621.59 18350.08
00:33:37.585 0
00:33:37.585 04:49:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:37.585 04:49:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:37.585 04:49:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:33:37.585 04:49:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:37.845 04:49:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 118 > 0 ))
00:33:37.845 04:49:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2941700
00:33:37.845 04:49:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2941700 ']'
00:33:37.845 04:49:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2941700
00:33:37.845 04:49:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:37.845 04:49:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:37.845 04:49:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2941700
00:33:37.845 04:49:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:37.845 04:49:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:37.845 04:49:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2941700'
00:33:37.845 killing process with pid 2941700
00:33:37.845 04:49:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2941700
00:33:37.845 Received shutdown signal, test time was about 2.000000 seconds
00:33:37.845 Latency(us)
00:33:37.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:37.845 ===================================================================================================================
00:33:37.845 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
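For reference, the two traced commands above (bperf_rpc bdev_get_iostat plus the jq filter) combine into a single pipeline. This is a hedged, standalone sketch built only from the rpc.py invocation and jq filter shown in this log, not the actual get_transient_errcount helper from host/digest.sh:

# Sketch: count the transient transport errors recorded by the bdevperf instance.
# Assumes the same RPC socket (/var/tmp/bperf.sock) and bdev name (nvme0n1) used above,
# and that --nvme-error-stat was enabled earlier via bdev_nvme_set_options.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 )) && echo "digest errors surfaced as $errcount transient transport errors"

Here the check matches the trace: 118 transient transport errors were counted, so (( 118 > 0 )) passes and the randread error case is considered successful before the bdevperf process is killed.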
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2941700 00:33:38.105 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:33:38.105 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:38.106 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:38.106 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:38.106 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:38.106 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2942101 00:33:38.106 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:38.106 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2942101 /var/tmp/bperf.sock 00:33:38.106 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2942101 ']' 00:33:38.106 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:38.106 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:38.106 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:38.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:38.106 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:38.106 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:38.106 [2024-07-14 04:49:58.083170] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
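In short, this stage of host/digest.sh starts bdevperf as an idle RPC server on a UNIX-domain socket and waits for that socket to come up before configuring it. A condensed sketch of the equivalent shell, using the paths printed in the trace (the polling loop and the rpc_get_methods probe are simplified stand-ins for the script's waitforlisten helper; the variable names are illustrative):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock
  # start bdevperf idle (-z) so it only runs I/O once perform_tests is issued over RPC
  "$SPDK"/build/examples/bdevperf -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # wait until the RPC socket answers before sending any configuration
  until "$SPDK"/scripts/rpc.py -s "$BPERF_SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done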
00:33:38.106 [2024-07-14 04:49:58.083250] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2942101 ] 00:33:38.106 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.106 [2024-07-14 04:49:58.144044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.106 [2024-07-14 04:49:58.233675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:38.365 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:38.365 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:38.365 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:38.365 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:38.623 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:38.623 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.623 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:38.623 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.623 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:38.623 04:49:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:38.882 nvme0n1 00:33:38.882 04:49:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:38.882 04:49:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.882 04:49:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:38.882 04:49:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.882 04:49:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:38.882 04:49:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:39.142 Running I/O for 2 seconds... 
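Condensed, the setup above amounts to: keep per-controller NVMe error statistics with unlimited bdev retries, make sure CRC32C error injection starts out disabled, attach the NVMe-oF TCP controller with data digest (--ddgst) enabled, arm the accel-layer CRC32C corruption (interval 256), and kick off the 2-second run. The commands below are the ones printed in the trace; the split between the two RPC sockets is an assumption based on rpc_cmd (target side, default socket) versus bperf_rpc (bdevperf side), and the final jq readout mirrors the errcount check shown after the previous run:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf instance
  TGT="$SPDK/scripts/rpc.py"                            # nvmf target, default RPC socket assumed
  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $TGT accel_error_inject_error -o crc32c -t disable
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $TGT accel_error_inject_error -o crc32c -t corrupt -i 256
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  $BPERF bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'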
00:33:39.142 [2024-07-14 04:49:59.178092] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f8a50 00:33:39.142 [2024-07-14 04:49:59.179038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.142 [2024-07-14 04:49:59.179082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:39.142 [2024-07-14 04:49:59.189378] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190dfdc0 00:33:39.142 [2024-07-14 04:49:59.190241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.142 [2024-07-14 04:49:59.190271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:39.142 [2024-07-14 04:49:59.202617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190ef270 00:33:39.142 [2024-07-14 04:49:59.203726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.142 [2024-07-14 04:49:59.203764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.142 [2024-07-14 04:49:59.214531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f0350 00:33:39.142 [2024-07-14 04:49:59.215574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.142 [2024-07-14 04:49:59.215603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.142 [2024-07-14 04:49:59.226373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f1430 00:33:39.142 [2024-07-14 04:49:59.227490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.142 [2024-07-14 04:49:59.227518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.142 [2024-07-14 04:49:59.238107] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f2510 00:33:39.142 [2024-07-14 04:49:59.239157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.142 [2024-07-14 04:49:59.239186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.142 [2024-07-14 04:49:59.249784] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f35f0 00:33:39.142 [2024-07-14 04:49:59.250824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.142 [2024-07-14 04:49:59.250853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:33:39.142 [2024-07-14 04:49:59.261499] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f46d0 00:33:39.142 [2024-07-14 04:49:59.262633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.142 [2024-07-14 04:49:59.262662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.142 [2024-07-14 04:49:59.273168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fdeb0 00:33:39.142 [2024-07-14 04:49:59.274200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.142 [2024-07-14 04:49:59.274228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.142 [2024-07-14 04:49:59.284833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fef90 00:33:39.142 [2024-07-14 04:49:59.285917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.142 [2024-07-14 04:49:59.285946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.142 [2024-07-14 04:49:59.296668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fc998 00:33:39.142 [2024-07-14 04:49:59.297706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.142 [2024-07-14 04:49:59.297734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.142 [2024-07-14 04:49:59.308378] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e27f0 00:33:39.142 [2024-07-14 04:49:59.309523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.142 [2024-07-14 04:49:59.309551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.142 [2024-07-14 04:49:59.320053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190de8a8 00:33:39.142 [2024-07-14 04:49:59.321097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.142 [2024-07-14 04:49:59.321126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.142 [2024-07-14 04:49:59.331914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190df988 00:33:39.142 [2024-07-14 04:49:59.333032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.142 [2024-07-14 04:49:59.333060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.403 [2024-07-14 04:49:59.343874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e0a68 00:33:39.403 [2024-07-14 04:49:59.344902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.403 [2024-07-14 04:49:59.344930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.403 [2024-07-14 04:49:59.355659] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e1b48 00:33:39.403 [2024-07-14 04:49:59.356699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.403 [2024-07-14 04:49:59.356726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.403 [2024-07-14 04:49:59.367430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e88f8 00:33:39.403 [2024-07-14 04:49:59.368547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.403 [2024-07-14 04:49:59.368575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.403 [2024-07-14 04:49:59.379133] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190edd58 00:33:39.403 [2024-07-14 04:49:59.380186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.403 [2024-07-14 04:49:59.380214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.403 [2024-07-14 04:49:59.390798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190eee38 00:33:39.403 [2024-07-14 04:49:59.391836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.403 [2024-07-14 04:49:59.391864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.403 [2024-07-14 04:49:59.402625] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190eff18 00:33:39.403 [2024-07-14 04:49:59.403734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.403 [2024-07-14 04:49:59.403762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.403 [2024-07-14 04:49:59.414316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f0ff8 00:33:39.403 [2024-07-14 04:49:59.415404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.403 [2024-07-14 04:49:59.415432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.403 [2024-07-14 04:49:59.425978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f20d8 00:33:39.403 [2024-07-14 04:49:59.427040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.403 [2024-07-14 04:49:59.427067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.403 [2024-07-14 04:49:59.437738] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f31b8 00:33:39.403 [2024-07-14 04:49:59.438780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.403 [2024-07-14 04:49:59.438808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.403 [2024-07-14 04:49:59.449462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f4298 00:33:39.403 [2024-07-14 04:49:59.450615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.403 [2024-07-14 04:49:59.450643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.403 [2024-07-14 04:49:59.461533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f5378 00:33:39.403 [2024-07-14 04:49:59.462674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.403 [2024-07-14 04:49:59.462706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.403 [2024-07-14 04:49:59.474239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fe2e8 00:33:39.403 [2024-07-14 04:49:59.475389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.403 [2024-07-14 04:49:59.475419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.403 [2024-07-14 04:49:59.486941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190feb58 00:33:39.403 [2024-07-14 04:49:59.488092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.403 [2024-07-14 04:49:59.488119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.403 [2024-07-14 04:49:59.499636] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fc560 00:33:39.403 [2024-07-14 04:49:59.500814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.403 [2024-07-14 04:49:59.500845] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.403 [2024-07-14 04:49:59.512362] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190ddc00 00:33:39.403 [2024-07-14 04:49:59.513496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.403 [2024-07-14 04:49:59.513534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.403 [2024-07-14 04:49:59.525379] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190dece0 00:33:39.403 [2024-07-14 04:49:59.526527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.403 [2024-07-14 04:49:59.526558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.403 [2024-07-14 04:49:59.537959] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190dfdc0 00:33:39.403 [2024-07-14 04:49:59.539107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.403 [2024-07-14 04:49:59.539134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.403 [2024-07-14 04:49:59.550648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e0ea0 00:33:39.403 [2024-07-14 04:49:59.551779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.403 [2024-07-14 04:49:59.551810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.403 [2024-07-14 04:49:59.563302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e1f80 00:33:39.403 [2024-07-14 04:49:59.564439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.403 [2024-07-14 04:49:59.564470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.403 [2024-07-14 04:49:59.575846] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e84c0 00:33:39.403 [2024-07-14 04:49:59.577018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.403 [2024-07-14 04:49:59.577046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.403 [2024-07-14 04:49:59.588613] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190ee190 00:33:39.403 [2024-07-14 04:49:59.589767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.403 [2024-07-14 04:49:59.589797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.665 [2024-07-14 04:49:59.601573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190ef270 00:33:39.665 [2024-07-14 04:49:59.602717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.665 [2024-07-14 04:49:59.602748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.665 [2024-07-14 04:49:59.614226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f0350 00:33:39.665 [2024-07-14 04:49:59.615354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.665 [2024-07-14 04:49:59.615384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.665 [2024-07-14 04:49:59.626854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f1430 00:33:39.665 [2024-07-14 04:49:59.628022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.665 [2024-07-14 04:49:59.628049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.665 [2024-07-14 04:49:59.639593] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f2510 00:33:39.665 [2024-07-14 04:49:59.640724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.665 [2024-07-14 04:49:59.640755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.665 [2024-07-14 04:49:59.652303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f35f0 00:33:39.666 [2024-07-14 04:49:59.653429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.666 [2024-07-14 04:49:59.653460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.666 [2024-07-14 04:49:59.664943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f46d0 00:33:39.666 [2024-07-14 04:49:59.666092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.666 [2024-07-14 04:49:59.666119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.666 [2024-07-14 04:49:59.677645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fdeb0 00:33:39.666 [2024-07-14 04:49:59.678795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.666 [2024-07-14 
04:49:59.678826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.666 [2024-07-14 04:49:59.690295] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fef90 00:33:39.666 [2024-07-14 04:49:59.691469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.666 [2024-07-14 04:49:59.691497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.666 [2024-07-14 04:49:59.702801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fc998 00:33:39.666 [2024-07-14 04:49:59.703985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.666 [2024-07-14 04:49:59.704013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.666 [2024-07-14 04:49:59.715601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e27f0 00:33:39.666 [2024-07-14 04:49:59.716762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.666 [2024-07-14 04:49:59.716792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.666 [2024-07-14 04:49:59.728265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190de8a8 00:33:39.666 [2024-07-14 04:49:59.729399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.666 [2024-07-14 04:49:59.729430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.666 [2024-07-14 04:49:59.740973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190df988 00:33:39.666 [2024-07-14 04:49:59.742125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.666 [2024-07-14 04:49:59.742152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.666 [2024-07-14 04:49:59.753694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e0a68 00:33:39.666 [2024-07-14 04:49:59.754885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.666 [2024-07-14 04:49:59.754928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.666 [2024-07-14 04:49:59.766458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e1b48 00:33:39.666 [2024-07-14 04:49:59.767578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:39.666 [2024-07-14 04:49:59.767609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.666 [2024-07-14 04:49:59.778958] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e88f8 00:33:39.666 [2024-07-14 04:49:59.780197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.666 [2024-07-14 04:49:59.780228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.666 [2024-07-14 04:49:59.791600] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190edd58 00:33:39.666 [2024-07-14 04:49:59.792732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.666 [2024-07-14 04:49:59.792763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.666 [2024-07-14 04:49:59.804313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190eee38 00:33:39.666 [2024-07-14 04:49:59.805457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.666 [2024-07-14 04:49:59.805487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.666 [2024-07-14 04:49:59.817026] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190eff18 00:33:39.666 [2024-07-14 04:49:59.818173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.666 [2024-07-14 04:49:59.818204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.666 [2024-07-14 04:49:59.829640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f0ff8 00:33:39.666 [2024-07-14 04:49:59.830823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.666 [2024-07-14 04:49:59.830854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.666 [2024-07-14 04:49:59.842342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f20d8 00:33:39.666 [2024-07-14 04:49:59.843482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.666 [2024-07-14 04:49:59.843518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.666 [2024-07-14 04:49:59.855138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f31b8 00:33:39.666 [2024-07-14 04:49:59.856346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19574 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:39.666 [2024-07-14 04:49:59.856377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.926 [2024-07-14 04:49:59.868058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f4298 00:33:39.926 [2024-07-14 04:49:59.869296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.926 [2024-07-14 04:49:59.869326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.926 [2024-07-14 04:49:59.880648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f5378 00:33:39.926 [2024-07-14 04:49:59.881783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.926 [2024-07-14 04:49:59.881813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.926 [2024-07-14 04:49:59.893324] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fe2e8 00:33:39.926 [2024-07-14 04:49:59.894462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.926 [2024-07-14 04:49:59.894493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.926 [2024-07-14 04:49:59.905888] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190feb58 00:33:39.926 [2024-07-14 04:49:59.907096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.926 [2024-07-14 04:49:59.907124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.926 [2024-07-14 04:49:59.918591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fc560 00:33:39.926 [2024-07-14 04:49:59.919721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.926 [2024-07-14 04:49:59.919751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.926 [2024-07-14 04:49:59.931280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190ddc00 00:33:39.926 [2024-07-14 04:49:59.932399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.926 [2024-07-14 04:49:59.932430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.926 [2024-07-14 04:49:59.943784] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190dece0 00:33:39.926 [2024-07-14 04:49:59.944946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:24099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.926 [2024-07-14 04:49:59.944973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.926 [2024-07-14 04:49:59.956322] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190dfdc0 00:33:39.926 [2024-07-14 04:49:59.957470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.926 [2024-07-14 04:49:59.957500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.926 [2024-07-14 04:49:59.969069] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e0ea0 00:33:39.926 [2024-07-14 04:49:59.970227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.926 [2024-07-14 04:49:59.970257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.926 [2024-07-14 04:49:59.981737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e1f80 00:33:39.926 [2024-07-14 04:49:59.982887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.926 [2024-07-14 04:49:59.982932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.926 [2024-07-14 04:49:59.994445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e84c0 00:33:39.926 [2024-07-14 04:49:59.995595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.926 [2024-07-14 04:49:59.995626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.926 [2024-07-14 04:50:00.007407] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190ee190 00:33:39.926 [2024-07-14 04:50:00.008569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.926 [2024-07-14 04:50:00.008603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.926 [2024-07-14 04:50:00.020306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190ef270 00:33:39.926 [2024-07-14 04:50:00.021494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.926 [2024-07-14 04:50:00.021527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.926 [2024-07-14 04:50:00.033221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f0350 00:33:39.926 [2024-07-14 04:50:00.034386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:16 nsid:1 lba:1357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.926 [2024-07-14 04:50:00.034417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.926 [2024-07-14 04:50:00.046154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f1430 00:33:39.926 [2024-07-14 04:50:00.047377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.926 [2024-07-14 04:50:00.047411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.926 [2024-07-14 04:50:00.059044] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f2510 00:33:39.926 [2024-07-14 04:50:00.060173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.926 [2024-07-14 04:50:00.060217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.926 [2024-07-14 04:50:00.071447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f35f0 00:33:39.926 [2024-07-14 04:50:00.072602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.926 [2024-07-14 04:50:00.072634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.926 [2024-07-14 04:50:00.084189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f46d0 00:33:39.926 [2024-07-14 04:50:00.085317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.926 [2024-07-14 04:50:00.085349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.926 [2024-07-14 04:50:00.097102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fdeb0 00:33:39.926 [2024-07-14 04:50:00.098281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.926 [2024-07-14 04:50:00.098312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:39.926 [2024-07-14 04:50:00.109980] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fef90 00:33:39.926 [2024-07-14 04:50:00.111118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.926 [2024-07-14 04:50:00.111151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.186 [2024-07-14 04:50:00.123051] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fc998 00:33:40.186 [2024-07-14 04:50:00.124186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.186 [2024-07-14 04:50:00.124230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.186 [2024-07-14 04:50:00.135700] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e27f0 00:33:40.186 [2024-07-14 04:50:00.136845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.186 [2024-07-14 04:50:00.136886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.186 [2024-07-14 04:50:00.148361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190de8a8 00:33:40.186 [2024-07-14 04:50:00.149491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.186 [2024-07-14 04:50:00.149523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.186 [2024-07-14 04:50:00.161038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190df988 00:33:40.186 [2024-07-14 04:50:00.162230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.186 [2024-07-14 04:50:00.162258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.186 [2024-07-14 04:50:00.173626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e0a68 00:33:40.186 [2024-07-14 04:50:00.174771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.186 [2024-07-14 04:50:00.174807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.186 [2024-07-14 04:50:00.186365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e1b48 00:33:40.186 [2024-07-14 04:50:00.187504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.186 [2024-07-14 04:50:00.187535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.186 [2024-07-14 04:50:00.198980] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e88f8 00:33:40.186 [2024-07-14 04:50:00.200122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.186 [2024-07-14 04:50:00.200150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.186 [2024-07-14 04:50:00.211510] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190edd58 00:33:40.186 [2024-07-14 
04:50:00.212664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.186 [2024-07-14 04:50:00.212695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.186 [2024-07-14 04:50:00.224236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190eee38 00:33:40.186 [2024-07-14 04:50:00.225392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.186 [2024-07-14 04:50:00.225423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.186 [2024-07-14 04:50:00.236849] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190eff18 00:33:40.186 [2024-07-14 04:50:00.238040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.186 [2024-07-14 04:50:00.238069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.186 [2024-07-14 04:50:00.249580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f0ff8 00:33:40.186 [2024-07-14 04:50:00.250710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.186 [2024-07-14 04:50:00.250741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.186 [2024-07-14 04:50:00.262224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f20d8 00:33:40.186 [2024-07-14 04:50:00.263355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.186 [2024-07-14 04:50:00.263386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.186 [2024-07-14 04:50:00.274812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f31b8 00:33:40.186 [2024-07-14 04:50:00.275983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.186 [2024-07-14 04:50:00.276011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.186 [2024-07-14 04:50:00.287570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f4298 00:33:40.186 [2024-07-14 04:50:00.288699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.186 [2024-07-14 04:50:00.288730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.186 [2024-07-14 04:50:00.300318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f5378 
00:33:40.186 [2024-07-14 04:50:00.301449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.186 [2024-07-14 04:50:00.301480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.186 [2024-07-14 04:50:00.312924] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fe2e8 00:33:40.186 [2024-07-14 04:50:00.314092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.186 [2024-07-14 04:50:00.314120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.186 [2024-07-14 04:50:00.325603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190feb58 00:33:40.186 [2024-07-14 04:50:00.326733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.186 [2024-07-14 04:50:00.326763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.186 [2024-07-14 04:50:00.338284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fc560 00:33:40.187 [2024-07-14 04:50:00.339402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.187 [2024-07-14 04:50:00.339432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.187 [2024-07-14 04:50:00.350886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190ddc00 00:33:40.187 [2024-07-14 04:50:00.352088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.187 [2024-07-14 04:50:00.352115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.187 [2024-07-14 04:50:00.363586] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190dece0 00:33:40.187 [2024-07-14 04:50:00.364724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.187 [2024-07-14 04:50:00.364754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.187 [2024-07-14 04:50:00.376418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190dfdc0 00:33:40.447 [2024-07-14 04:50:00.377640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.447 [2024-07-14 04:50:00.377672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.447 [2024-07-14 04:50:00.389317] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) 
with pdu=0x2000190e0ea0 00:33:40.447 [2024-07-14 04:50:00.390459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.447 [2024-07-14 04:50:00.390489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.447 [2024-07-14 04:50:00.402076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e1f80 00:33:40.447 [2024-07-14 04:50:00.403290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.447 [2024-07-14 04:50:00.403320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.447 [2024-07-14 04:50:00.414672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e84c0 00:33:40.447 [2024-07-14 04:50:00.415814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.447 [2024-07-14 04:50:00.415844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.447 [2024-07-14 04:50:00.427313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190ee190 00:33:40.447 [2024-07-14 04:50:00.428462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.447 [2024-07-14 04:50:00.428493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.447 [2024-07-14 04:50:00.439968] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190ef270 00:33:40.447 [2024-07-14 04:50:00.441109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.447 [2024-07-14 04:50:00.441137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.447 [2024-07-14 04:50:00.452627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f0350 00:33:40.447 [2024-07-14 04:50:00.453725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.447 [2024-07-14 04:50:00.453752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.447 [2024-07-14 04:50:00.464987] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f1430 00:33:40.447 [2024-07-14 04:50:00.466027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.448 [2024-07-14 04:50:00.466055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.448 [2024-07-14 04:50:00.477441] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2329910) with pdu=0x2000190f2510 00:33:40.448 [2024-07-14 04:50:00.478590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.448 [2024-07-14 04:50:00.478621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.448 [2024-07-14 04:50:00.490171] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f35f0 00:33:40.448 [2024-07-14 04:50:00.491338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.448 [2024-07-14 04:50:00.491364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.448 [2024-07-14 04:50:00.501883] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f46d0 00:33:40.448 [2024-07-14 04:50:00.502901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.448 [2024-07-14 04:50:00.502941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.448 [2024-07-14 04:50:00.513648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fdeb0 00:33:40.448 [2024-07-14 04:50:00.514786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.448 [2024-07-14 04:50:00.514814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.448 [2024-07-14 04:50:00.525444] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fef90 00:33:40.448 [2024-07-14 04:50:00.526579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.448 [2024-07-14 04:50:00.526606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.448 [2024-07-14 04:50:00.537493] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fc998 00:33:40.448 [2024-07-14 04:50:00.538636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.448 [2024-07-14 04:50:00.538663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.448 [2024-07-14 04:50:00.549272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e27f0 00:33:40.448 [2024-07-14 04:50:00.550395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.448 [2024-07-14 04:50:00.550421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.448 [2024-07-14 04:50:00.561088] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190de8a8 00:33:40.448 [2024-07-14 04:50:00.562138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.448 [2024-07-14 04:50:00.562166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.448 [2024-07-14 04:50:00.572864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190df988 00:33:40.448 [2024-07-14 04:50:00.573967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.448 [2024-07-14 04:50:00.573994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.448 [2024-07-14 04:50:00.584702] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e0a68 00:33:40.448 [2024-07-14 04:50:00.585775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.448 [2024-07-14 04:50:00.585803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.448 [2024-07-14 04:50:00.596462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e1b48 00:33:40.448 [2024-07-14 04:50:00.597575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.448 [2024-07-14 04:50:00.597601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.448 [2024-07-14 04:50:00.608330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e88f8 00:33:40.448 [2024-07-14 04:50:00.609404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.448 [2024-07-14 04:50:00.609431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.448 [2024-07-14 04:50:00.620036] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190edd58 00:33:40.448 [2024-07-14 04:50:00.621078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.448 [2024-07-14 04:50:00.621105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.448 [2024-07-14 04:50:00.631680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190eee38 00:33:40.448 [2024-07-14 04:50:00.632833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.448 [2024-07-14 04:50:00.632861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.708 
[2024-07-14 04:50:00.643882] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190eff18 00:33:40.708 [2024-07-14 04:50:00.644993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.645021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.709 [2024-07-14 04:50:00.655570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f0ff8 00:33:40.709 [2024-07-14 04:50:00.656628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.656655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.709 [2024-07-14 04:50:00.667266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f20d8 00:33:40.709 [2024-07-14 04:50:00.668491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.668519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.709 [2024-07-14 04:50:00.679170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f31b8 00:33:40.709 [2024-07-14 04:50:00.680312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.680339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.709 [2024-07-14 04:50:00.690931] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f4298 00:33:40.709 [2024-07-14 04:50:00.692017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.692045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.709 [2024-07-14 04:50:00.702576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f5378 00:33:40.709 [2024-07-14 04:50:00.703680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.703707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.709 [2024-07-14 04:50:00.714304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fe2e8 00:33:40.709 [2024-07-14 04:50:00.715444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.715470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 
dnr:0 00:33:40.709 [2024-07-14 04:50:00.726066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190feb58 00:33:40.709 [2024-07-14 04:50:00.727114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.727141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.709 [2024-07-14 04:50:00.737650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fc560 00:33:40.709 [2024-07-14 04:50:00.738771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.738799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.709 [2024-07-14 04:50:00.749398] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190ddc00 00:33:40.709 [2024-07-14 04:50:00.750524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.750552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.709 [2024-07-14 04:50:00.761130] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190dece0 00:33:40.709 [2024-07-14 04:50:00.762177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.762204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.709 [2024-07-14 04:50:00.772706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190dfdc0 00:33:40.709 [2024-07-14 04:50:00.773795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.773821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.709 [2024-07-14 04:50:00.784404] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e0ea0 00:33:40.709 [2024-07-14 04:50:00.785531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.785559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.709 [2024-07-14 04:50:00.796091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e1f80 00:33:40.709 [2024-07-14 04:50:00.797142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.797183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:33:40.709 [2024-07-14 04:50:00.807830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e84c0 00:33:40.709 [2024-07-14 04:50:00.808893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.808926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.709 [2024-07-14 04:50:00.819525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190ee190 00:33:40.709 [2024-07-14 04:50:00.820563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.820590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.709 [2024-07-14 04:50:00.831282] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190ef270 00:33:40.709 [2024-07-14 04:50:00.832410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.832437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.709 [2024-07-14 04:50:00.843002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f0350 00:33:40.709 [2024-07-14 04:50:00.844070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.844098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.709 [2024-07-14 04:50:00.854623] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f1430 00:33:40.709 [2024-07-14 04:50:00.855666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.855693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.709 [2024-07-14 04:50:00.866246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f2510 00:33:40.709 [2024-07-14 04:50:00.867304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.867330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.709 [2024-07-14 04:50:00.877912] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f35f0 00:33:40.709 [2024-07-14 04:50:00.878930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.878957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.709 [2024-07-14 04:50:00.889557] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f46d0 00:33:40.709 [2024-07-14 04:50:00.890641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.709 [2024-07-14 04:50:00.890668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.968 [2024-07-14 04:50:00.901629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fdeb0 00:33:40.968 [2024-07-14 04:50:00.902733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.968 [2024-07-14 04:50:00.902760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.968 [2024-07-14 04:50:00.913554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fef90 00:33:40.968 [2024-07-14 04:50:00.914686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.968 [2024-07-14 04:50:00.914714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.968 [2024-07-14 04:50:00.925346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fc998 00:33:40.968 [2024-07-14 04:50:00.926479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.968 [2024-07-14 04:50:00.926506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.968 [2024-07-14 04:50:00.937071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e27f0 00:33:40.968 [2024-07-14 04:50:00.938113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.968 [2024-07-14 04:50:00.938140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.968 [2024-07-14 04:50:00.948752] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190de8a8 00:33:40.968 [2024-07-14 04:50:00.949773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.968 [2024-07-14 04:50:00.949800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.968 [2024-07-14 04:50:00.960490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190df988 00:33:40.969 [2024-07-14 04:50:00.961533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.969 [2024-07-14 04:50:00.961561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.969 [2024-07-14 04:50:00.972243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e0a68 00:33:40.969 [2024-07-14 04:50:00.973345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.969 [2024-07-14 04:50:00.973372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.969 [2024-07-14 04:50:00.984027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e1b48 00:33:40.969 [2024-07-14 04:50:00.985034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.969 [2024-07-14 04:50:00.985062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.969 [2024-07-14 04:50:00.995672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190e88f8 00:33:40.969 [2024-07-14 04:50:00.996767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.969 [2024-07-14 04:50:00.996795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.969 [2024-07-14 04:50:01.007332] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190edd58 00:33:40.969 [2024-07-14 04:50:01.008375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.969 [2024-07-14 04:50:01.008402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.969 [2024-07-14 04:50:01.018992] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190eee38 00:33:40.969 [2024-07-14 04:50:01.020035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.969 [2024-07-14 04:50:01.020063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.969 [2024-07-14 04:50:01.030691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190eff18 00:33:40.969 [2024-07-14 04:50:01.031791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.969 [2024-07-14 04:50:01.031818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.969 [2024-07-14 04:50:01.042383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f0ff8 00:33:40.969 [2024-07-14 04:50:01.043463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.969 [2024-07-14 04:50:01.043490] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.969 [2024-07-14 04:50:01.054040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f20d8 00:33:40.969 [2024-07-14 04:50:01.055087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.969 [2024-07-14 04:50:01.055114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.969 [2024-07-14 04:50:01.065675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f31b8 00:33:40.969 [2024-07-14 04:50:01.066795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.969 [2024-07-14 04:50:01.066823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.969 [2024-07-14 04:50:01.077423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f4298 00:33:40.969 [2024-07-14 04:50:01.078560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.969 [2024-07-14 04:50:01.078587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.969 [2024-07-14 04:50:01.089131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190f5378 00:33:40.969 [2024-07-14 04:50:01.090176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.969 [2024-07-14 04:50:01.090203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.969 [2024-07-14 04:50:01.100788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fe2e8 00:33:40.969 [2024-07-14 04:50:01.101803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.969 [2024-07-14 04:50:01.101830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.969 [2024-07-14 04:50:01.112460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190feb58 00:33:40.969 [2024-07-14 04:50:01.113604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.969 [2024-07-14 04:50:01.113638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.969 [2024-07-14 04:50:01.124213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190fc560 00:33:40.969 [2024-07-14 04:50:01.125281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.969 [2024-07-14 04:50:01.125308] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.969 [2024-07-14 04:50:01.135962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190ddc00 00:33:40.969 [2024-07-14 04:50:01.137027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.969 [2024-07-14 04:50:01.137055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:40.969 [2024-07-14 04:50:01.147675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190dece0 00:33:40.969 [2024-07-14 04:50:01.148771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.969 [2024-07-14 04:50:01.148799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:41.227 [2024-07-14 04:50:01.159872] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329910) with pdu=0x2000190dfdc0 00:33:41.227 [2024-07-14 04:50:01.160915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.227 [2024-07-14 04:50:01.160943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:41.227 00:33:41.227 Latency(us) 00:33:41.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.227 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:41.227 nvme0n1 : 2.00 20812.11 81.30 0.00 0.00 6140.42 2924.85 17282.09 00:33:41.227 =================================================================================================================== 00:33:41.227 Total : 20812.11 81.30 0.00 0.00 6140.42 2924.85 17282.09 00:33:41.227 0 00:33:41.227 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:41.227 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:41.227 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:41.227 | .driver_specific 00:33:41.227 | .nvme_error 00:33:41.227 | .status_code 00:33:41.227 | .command_transient_transport_error' 00:33:41.227 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:41.486 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 163 > 0 )) 00:33:41.486 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2942101 00:33:41.486 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2942101 ']' 00:33:41.486 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2942101 00:33:41.486 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:41.486 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:41.486 04:50:01 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2942101 00:33:41.486 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:41.486 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:41.486 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2942101' 00:33:41.486 killing process with pid 2942101 00:33:41.486 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2942101 00:33:41.486 Received shutdown signal, test time was about 2.000000 seconds 00:33:41.486 00:33:41.486 Latency(us) 00:33:41.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.486 =================================================================================================================== 00:33:41.486 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:41.486 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2942101 00:33:41.746 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:41.746 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:41.746 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:41.746 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:41.746 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:41.746 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2942510 00:33:41.746 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:41.746 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2942510 /var/tmp/bperf.sock 00:33:41.746 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2942510 ']' 00:33:41.746 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:41.746 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:41.746 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:41.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:41.746 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:41.746 04:50:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:41.746 [2024-07-14 04:50:01.739173] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:41.747 [2024-07-14 04:50:01.739247] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2942510 ] 00:33:41.747 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:41.747 Zero copy mechanism will not be used. 
00:33:41.747 EAL: No free 2048 kB hugepages reported on node 1 00:33:41.747 [2024-07-14 04:50:01.801189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.747 [2024-07-14 04:50:01.891460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:42.004 04:50:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:42.004 04:50:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:42.004 04:50:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:42.004 04:50:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:42.262 04:50:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:42.262 04:50:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.262 04:50:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:42.262 04:50:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.262 04:50:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:42.262 04:50:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:42.832 nvme0n1 00:33:42.832 04:50:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:42.832 04:50:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.832 04:50:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:42.832 04:50:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.832 04:50:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:42.832 04:50:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:42.832 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:42.832 Zero copy mechanism will not be used. 00:33:42.832 Running I/O for 2 seconds... 
00:33:42.832 [2024-07-14 04:50:02.922748] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:42.832 [2024-07-14 04:50:02.923176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.832 [2024-07-14 04:50:02.923241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.832 [2024-07-14 04:50:02.942220] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:42.832 [2024-07-14 04:50:02.942759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.832 [2024-07-14 04:50:02.942805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.832 [2024-07-14 04:50:02.960676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:42.832 [2024-07-14 04:50:02.961073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.832 [2024-07-14 04:50:02.961105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.832 [2024-07-14 04:50:02.980200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:42.832 [2024-07-14 04:50:02.980532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.832 [2024-07-14 04:50:02.980562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.832 [2024-07-14 04:50:02.999698] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:42.832 [2024-07-14 04:50:03.000104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.832 [2024-07-14 04:50:03.000135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.832 [2024-07-14 04:50:03.019713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:42.832 [2024-07-14 04:50:03.020145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.832 [2024-07-14 04:50:03.020177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.092 [2024-07-14 04:50:03.039969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.092 [2024-07-14 04:50:03.040380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.092 [2024-07-14 04:50:03.040409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.092 [2024-07-14 04:50:03.058531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.092 [2024-07-14 04:50:03.058905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.092 [2024-07-14 04:50:03.058934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.092 [2024-07-14 04:50:03.075665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.092 [2024-07-14 04:50:03.076051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.092 [2024-07-14 04:50:03.076096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.092 [2024-07-14 04:50:03.092788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.092 [2024-07-14 04:50:03.093188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.092 [2024-07-14 04:50:03.093219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.092 [2024-07-14 04:50:03.113784] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.092 [2024-07-14 04:50:03.114384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.092 [2024-07-14 04:50:03.114427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.092 [2024-07-14 04:50:03.133995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.092 [2024-07-14 04:50:03.134563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.092 [2024-07-14 04:50:03.134592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.092 [2024-07-14 04:50:03.153370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.092 [2024-07-14 04:50:03.153781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.092 [2024-07-14 04:50:03.153826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.092 [2024-07-14 04:50:03.173279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.092 [2024-07-14 04:50:03.173675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.092 [2024-07-14 04:50:03.173714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.092 [2024-07-14 04:50:03.192573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.092 [2024-07-14 04:50:03.193105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.092 [2024-07-14 04:50:03.193135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.092 [2024-07-14 04:50:03.213995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.092 [2024-07-14 04:50:03.214598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.092 [2024-07-14 04:50:03.214641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.092 [2024-07-14 04:50:03.233725] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.092 [2024-07-14 04:50:03.234279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.092 [2024-07-14 04:50:03.234308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.092 [2024-07-14 04:50:03.253294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.093 [2024-07-14 04:50:03.253705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.093 [2024-07-14 04:50:03.253735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.093 [2024-07-14 04:50:03.271436] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.093 [2024-07-14 04:50:03.271930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.093 [2024-07-14 04:50:03.271960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.351 [2024-07-14 04:50:03.289516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.351 [2024-07-14 04:50:03.289939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.351 [2024-07-14 04:50:03.289968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.351 [2024-07-14 04:50:03.308845] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.351 [2024-07-14 04:50:03.309251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.351 [2024-07-14 04:50:03.309280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.351 [2024-07-14 04:50:03.328365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.351 [2024-07-14 04:50:03.328751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.351 [2024-07-14 04:50:03.328780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.351 [2024-07-14 04:50:03.347714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.352 [2024-07-14 04:50:03.348104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.352 [2024-07-14 04:50:03.348141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.352 [2024-07-14 04:50:03.368058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.352 [2024-07-14 04:50:03.368622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.352 [2024-07-14 04:50:03.368651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.352 [2024-07-14 04:50:03.388661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.352 [2024-07-14 04:50:03.389061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.352 [2024-07-14 04:50:03.389090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.352 [2024-07-14 04:50:03.406952] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.352 [2024-07-14 04:50:03.407319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.352 [2024-07-14 04:50:03.407348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.352 [2024-07-14 04:50:03.426678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.352 [2024-07-14 04:50:03.427070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.352 [2024-07-14 04:50:03.427100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.352 [2024-07-14 04:50:03.447351] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.352 [2024-07-14 04:50:03.447843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.352 
[2024-07-14 04:50:03.447896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.352 [2024-07-14 04:50:03.466189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.352 [2024-07-14 04:50:03.466586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.352 [2024-07-14 04:50:03.466616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.352 [2024-07-14 04:50:03.483932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.352 [2024-07-14 04:50:03.484315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.352 [2024-07-14 04:50:03.484344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.352 [2024-07-14 04:50:03.502137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.352 [2024-07-14 04:50:03.502529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.352 [2024-07-14 04:50:03.502572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.352 [2024-07-14 04:50:03.521136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.352 [2024-07-14 04:50:03.521541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.352 [2024-07-14 04:50:03.521570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.352 [2024-07-14 04:50:03.541196] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.352 [2024-07-14 04:50:03.541580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.352 [2024-07-14 04:50:03.541612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.611 [2024-07-14 04:50:03.560009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.611 [2024-07-14 04:50:03.560541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.611 [2024-07-14 04:50:03.560570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.611 [2024-07-14 04:50:03.581488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.611 [2024-07-14 04:50:03.581989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.611 [2024-07-14 04:50:03.582040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.611 [2024-07-14 04:50:03.601547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.611 [2024-07-14 04:50:03.602032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.611 [2024-07-14 04:50:03.602061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.611 [2024-07-14 04:50:03.621989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.611 [2024-07-14 04:50:03.622534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.611 [2024-07-14 04:50:03.622563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.611 [2024-07-14 04:50:03.642745] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.611 [2024-07-14 04:50:03.643146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.611 [2024-07-14 04:50:03.643190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.611 [2024-07-14 04:50:03.661988] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.611 [2024-07-14 04:50:03.662559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.611 [2024-07-14 04:50:03.662587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.611 [2024-07-14 04:50:03.682512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.611 [2024-07-14 04:50:03.682924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.611 [2024-07-14 04:50:03.682959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.611 [2024-07-14 04:50:03.697462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.611 [2024-07-14 04:50:03.697843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.611 [2024-07-14 04:50:03.697881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.611 [2024-07-14 04:50:03.715080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.611 [2024-07-14 04:50:03.715447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.611 [2024-07-14 04:50:03.715493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.611 [2024-07-14 04:50:03.735685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.611 [2024-07-14 04:50:03.736091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.611 [2024-07-14 04:50:03.736129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.612 [2024-07-14 04:50:03.755974] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.612 [2024-07-14 04:50:03.756436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.612 [2024-07-14 04:50:03.756464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.612 [2024-07-14 04:50:03.773816] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.612 [2024-07-14 04:50:03.774365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.612 [2024-07-14 04:50:03.774408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.612 [2024-07-14 04:50:03.793035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.612 [2024-07-14 04:50:03.793573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.612 [2024-07-14 04:50:03.793601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.871 [2024-07-14 04:50:03.812736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.871 [2024-07-14 04:50:03.813106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.871 [2024-07-14 04:50:03.813135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.871 [2024-07-14 04:50:03.833187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.871 [2024-07-14 04:50:03.833692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.871 [2024-07-14 04:50:03.833720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.871 [2024-07-14 04:50:03.852538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.871 [2024-07-14 04:50:03.853100] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.871 [2024-07-14 04:50:03.853130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.871 [2024-07-14 04:50:03.872961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.871 [2024-07-14 04:50:03.873362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.871 [2024-07-14 04:50:03.873408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.871 [2024-07-14 04:50:03.893088] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.871 [2024-07-14 04:50:03.893557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.871 [2024-07-14 04:50:03.893585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.871 [2024-07-14 04:50:03.913619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.871 [2024-07-14 04:50:03.914100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.871 [2024-07-14 04:50:03.914137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.871 [2024-07-14 04:50:03.930677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.871 [2024-07-14 04:50:03.931099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.871 [2024-07-14 04:50:03.931145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.871 [2024-07-14 04:50:03.950120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.871 [2024-07-14 04:50:03.950530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.871 [2024-07-14 04:50:03.950563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.871 [2024-07-14 04:50:03.969873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.871 [2024-07-14 04:50:03.970403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.871 [2024-07-14 04:50:03.970447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.871 [2024-07-14 04:50:03.989761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.871 
[2024-07-14 04:50:03.990131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.871 [2024-07-14 04:50:03.990162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.871 [2024-07-14 04:50:04.008228] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.871 [2024-07-14 04:50:04.008612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.871 [2024-07-14 04:50:04.008656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.871 [2024-07-14 04:50:04.028300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.871 [2024-07-14 04:50:04.028808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.871 [2024-07-14 04:50:04.028836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.871 [2024-07-14 04:50:04.045637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:43.871 [2024-07-14 04:50:04.046013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.871 [2024-07-14 04:50:04.046058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.131 [2024-07-14 04:50:04.064384] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.131 [2024-07-14 04:50:04.064768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.131 [2024-07-14 04:50:04.064796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.131 [2024-07-14 04:50:04.084700] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.131 [2024-07-14 04:50:04.085209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.131 [2024-07-14 04:50:04.085238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.131 [2024-07-14 04:50:04.104846] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.131 [2024-07-14 04:50:04.105385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.131 [2024-07-14 04:50:04.105413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.131 [2024-07-14 04:50:04.124765] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.131 [2024-07-14 04:50:04.125129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.131 [2024-07-14 04:50:04.125157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.131 [2024-07-14 04:50:04.145244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.131 [2024-07-14 04:50:04.145722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.131 [2024-07-14 04:50:04.145765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.131 [2024-07-14 04:50:04.167086] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.131 [2024-07-14 04:50:04.167635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.131 [2024-07-14 04:50:04.167677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.131 [2024-07-14 04:50:04.186591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.131 [2024-07-14 04:50:04.186921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.131 [2024-07-14 04:50:04.186956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.131 [2024-07-14 04:50:04.208261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.131 [2024-07-14 04:50:04.208664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.131 [2024-07-14 04:50:04.208692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.131 [2024-07-14 04:50:04.229214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.131 [2024-07-14 04:50:04.229931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.131 [2024-07-14 04:50:04.229960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.132 [2024-07-14 04:50:04.248349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.132 [2024-07-14 04:50:04.248870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.132 [2024-07-14 04:50:04.248897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.132 [2024-07-14 04:50:04.267674] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.132 [2024-07-14 04:50:04.268076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.132 [2024-07-14 04:50:04.268104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.132 [2024-07-14 04:50:04.287499] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.132 [2024-07-14 04:50:04.287886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.132 [2024-07-14 04:50:04.287932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.132 [2024-07-14 04:50:04.306071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.132 [2024-07-14 04:50:04.306506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.132 [2024-07-14 04:50:04.306533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.390 [2024-07-14 04:50:04.326295] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.390 [2024-07-14 04:50:04.326928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.390 [2024-07-14 04:50:04.326956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.390 [2024-07-14 04:50:04.345378] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.390 [2024-07-14 04:50:04.345780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.390 [2024-07-14 04:50:04.345821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.390 [2024-07-14 04:50:04.365141] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.390 [2024-07-14 04:50:04.365797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.391 [2024-07-14 04:50:04.365824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.391 [2024-07-14 04:50:04.383919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.391 [2024-07-14 04:50:04.384439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.391 [2024-07-14 04:50:04.384466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
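The records above, and those that continue below, all follow the same pattern: data_crc32_calc_done flags a data digest (CRC32C) mismatch on the TCP qpair, and the in-flight WRITE is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is the status the digest-error test is deliberately provoking and counting. The test itself reads the counter back over RPC (the bdev_get_iostat / jq call further down); as a rough stand-alone illustration only, the same tally could be taken from a saved copy of this console output, assuming it was captured to a file named digest_error.log:

# Count completions that failed with the transient transport error status (00/22).
# digest_error.log is an assumed capture of the console output above, not a file the test creates.
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' digest_error.log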
00:33:44.391 [2024-07-14 04:50:04.404968] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.391 [2024-07-14 04:50:04.405433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.391 [2024-07-14 04:50:04.405475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.391 [2024-07-14 04:50:04.424097] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.391 [2024-07-14 04:50:04.424721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.391 [2024-07-14 04:50:04.424748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.391 [2024-07-14 04:50:04.445719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.391 [2024-07-14 04:50:04.446091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.391 [2024-07-14 04:50:04.446119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.391 [2024-07-14 04:50:04.466036] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.391 [2024-07-14 04:50:04.466484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.391 [2024-07-14 04:50:04.466512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.391 [2024-07-14 04:50:04.487041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.391 [2024-07-14 04:50:04.487525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.391 [2024-07-14 04:50:04.487552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.391 [2024-07-14 04:50:04.508393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.391 [2024-07-14 04:50:04.508986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.391 [2024-07-14 04:50:04.509013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.391 [2024-07-14 04:50:04.527543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.391 [2024-07-14 04:50:04.527929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.391 [2024-07-14 04:50:04.527971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.391 [2024-07-14 04:50:04.549062] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.391 [2024-07-14 04:50:04.549666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.391 [2024-07-14 04:50:04.549693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.391 [2024-07-14 04:50:04.570082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.391 [2024-07-14 04:50:04.570518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.391 [2024-07-14 04:50:04.570545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.650 [2024-07-14 04:50:04.588980] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.650 [2024-07-14 04:50:04.589346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.650 [2024-07-14 04:50:04.589389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.650 [2024-07-14 04:50:04.607101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.650 [2024-07-14 04:50:04.607775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.650 [2024-07-14 04:50:04.607801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.650 [2024-07-14 04:50:04.630909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.650 [2024-07-14 04:50:04.631449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.650 [2024-07-14 04:50:04.631476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.650 [2024-07-14 04:50:04.650391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.650 [2024-07-14 04:50:04.650917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.650 [2024-07-14 04:50:04.650960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.650 [2024-07-14 04:50:04.670680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.650 [2024-07-14 04:50:04.671319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.650 [2024-07-14 04:50:04.671360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.650 [2024-07-14 04:50:04.690574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.650 [2024-07-14 04:50:04.691046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.650 [2024-07-14 04:50:04.691086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.650 [2024-07-14 04:50:04.711787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.650 [2024-07-14 04:50:04.712367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.650 [2024-07-14 04:50:04.712419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.650 [2024-07-14 04:50:04.732555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.650 [2024-07-14 04:50:04.732986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.650 [2024-07-14 04:50:04.733015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.650 [2024-07-14 04:50:04.751152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.650 [2024-07-14 04:50:04.751565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.650 [2024-07-14 04:50:04.751608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.650 [2024-07-14 04:50:04.771124] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.650 [2024-07-14 04:50:04.771602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.650 [2024-07-14 04:50:04.771643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.650 [2024-07-14 04:50:04.791713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.650 [2024-07-14 04:50:04.792301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.650 [2024-07-14 04:50:04.792346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.650 [2024-07-14 04:50:04.811998] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.650 [2024-07-14 04:50:04.812375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.650 [2024-07-14 04:50:04.812416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.650 [2024-07-14 04:50:04.831474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.650 [2024-07-14 04:50:04.831936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.650 [2024-07-14 04:50:04.831979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.937 [2024-07-14 04:50:04.852220] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.937 [2024-07-14 04:50:04.852749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.937 [2024-07-14 04:50:04.852778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.937 [2024-07-14 04:50:04.871967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.937 [2024-07-14 04:50:04.872485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.937 [2024-07-14 04:50:04.872512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.937 [2024-07-14 04:50:04.892519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2329c50) with pdu=0x2000190fef90 00:33:44.937 [2024-07-14 04:50:04.892901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.937 [2024-07-14 04:50:04.892944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.937 00:33:44.937 Latency(us) 00:33:44.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:44.937 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:44.937 nvme0n1 : 2.01 1570.65 196.33 0.00 0.00 10158.64 6990.51 21748.24 00:33:44.937 =================================================================================================================== 00:33:44.937 Total : 1570.65 196.33 0.00 0.00 10158.64 6990.51 21748.24 00:33:44.937 0 00:33:44.937 04:50:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:44.937 04:50:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:44.937 04:50:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:44.937 04:50:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:44.937 | .driver_specific 00:33:44.937 | .nvme_error 00:33:44.937 | .status_code 00:33:44.937 | .command_transient_transport_error' 00:33:45.197 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 101 > 0 )) 00:33:45.197 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@73 -- # killprocess 2942510 00:33:45.197 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2942510 ']' 00:33:45.197 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2942510 00:33:45.197 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:45.197 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:45.197 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2942510 00:33:45.197 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:45.197 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:45.197 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2942510' 00:33:45.197 killing process with pid 2942510 00:33:45.197 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2942510 00:33:45.197 Received shutdown signal, test time was about 2.000000 seconds 00:33:45.197 00:33:45.197 Latency(us) 00:33:45.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.197 =================================================================================================================== 00:33:45.197 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:45.197 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2942510 00:33:45.457 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2941147 00:33:45.457 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2941147 ']' 00:33:45.457 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2941147 00:33:45.457 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:45.457 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:45.457 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2941147 00:33:45.457 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:45.457 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:45.457 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2941147' 00:33:45.457 killing process with pid 2941147 00:33:45.457 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2941147 00:33:45.457 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2941147 00:33:45.717 00:33:45.717 real 0m15.239s 00:33:45.717 user 0m30.845s 00:33:45.717 sys 0m3.832s 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:45.717 ************************************ 00:33:45.717 END TEST nvmf_digest_error 00:33:45.717 ************************************ 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # 
trap - SIGINT SIGTERM EXIT 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:45.717 rmmod nvme_tcp 00:33:45.717 rmmod nvme_fabrics 00:33:45.717 rmmod nvme_keyring 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2941147 ']' 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2941147 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 2941147 ']' 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 2941147 00:33:45.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2941147) - No such process 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 2941147 is not found' 00:33:45.717 Process with pid 2941147 is not found 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:45.717 04:50:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.627 04:50:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:47.627 00:33:47.627 real 0m34.737s 00:33:47.627 user 1m2.010s 00:33:47.627 sys 0m9.252s 00:33:47.627 04:50:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:47.627 04:50:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:47.627 ************************************ 00:33:47.627 END TEST nvmf_digest 00:33:47.627 ************************************ 00:33:47.886 04:50:07 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:33:47.886 04:50:07 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:33:47.886 04:50:07 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:33:47.886 04:50:07 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:47.886 04:50:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:47.886 04:50:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:47.886 04:50:07 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:33:47.886 ************************************ 00:33:47.886 START TEST nvmf_bdevperf 00:33:47.886 ************************************ 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:47.886 * Looking for test storage... 00:33:47.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:47.886 04:50:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:49.790 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:49.790 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:49.790 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:49.791 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:49.791 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:49.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:49.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:33:49.791 00:33:49.791 --- 10.0.0.2 ping statistics --- 00:33:49.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:49.791 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:33:49.791 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:50.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:50.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:33:50.050 00:33:50.050 --- 10.0.0.1 ping statistics --- 00:33:50.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.050 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:33:50.050 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:50.050 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:50.050 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:50.050 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:50.050 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:50.050 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:50.050 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:50.050 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:50.050 04:50:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:50.050 04:50:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:50.050 04:50:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:50.050 04:50:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:50.050 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:50.050 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:50.050 04:50:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2944858 00:33:50.050 04:50:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:50.050 04:50:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2944858 00:33:50.050 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 2944858 ']' 00:33:50.050 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:50.050 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:50.050 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:50.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:50.050 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:50.050 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:50.050 [2024-07-14 04:50:10.063462] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:50.050 [2024-07-14 04:50:10.063554] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:50.050 EAL: No free 2048 kB hugepages reported on node 1 00:33:50.050 [2024-07-14 04:50:10.133578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:50.050 [2024-07-14 04:50:10.224738] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:50.050 [2024-07-14 04:50:10.224789] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:50.050 [2024-07-14 04:50:10.224819] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:50.050 [2024-07-14 04:50:10.224831] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:50.050 [2024-07-14 04:50:10.224840] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:50.050 [2024-07-14 04:50:10.224910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:50.050 [2024-07-14 04:50:10.224977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:50.050 [2024-07-14 04:50:10.224973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:50.309 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:50.309 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:33:50.309 04:50:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:50.309 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:50.309 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:50.309 04:50:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:50.309 04:50:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:50.309 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.309 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:50.309 [2024-07-14 04:50:10.369397] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:50.309 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.309 04:50:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:50.309 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.309 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:50.309 Malloc0 00:33:50.309 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.309 04:50:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:50.309 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.310 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:50.310 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.310 04:50:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:50.310 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.310 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:50.310 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.310 04:50:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:50.310 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:33:50.310 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:50.310 [2024-07-14 04:50:10.429984] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:50.310 04:50:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.310 04:50:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:50.310 04:50:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:50.310 04:50:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:50.310 04:50:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:50.310 04:50:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:50.310 04:50:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:50.310 { 00:33:50.310 "params": { 00:33:50.310 "name": "Nvme$subsystem", 00:33:50.310 "trtype": "$TEST_TRANSPORT", 00:33:50.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:50.310 "adrfam": "ipv4", 00:33:50.310 "trsvcid": "$NVMF_PORT", 00:33:50.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:50.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:50.310 "hdgst": ${hdgst:-false}, 00:33:50.310 "ddgst": ${ddgst:-false} 00:33:50.310 }, 00:33:50.310 "method": "bdev_nvme_attach_controller" 00:33:50.310 } 00:33:50.310 EOF 00:33:50.310 )") 00:33:50.310 04:50:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:50.310 04:50:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:50.310 04:50:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:50.310 04:50:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:50.310 "params": { 00:33:50.310 "name": "Nvme1", 00:33:50.310 "trtype": "tcp", 00:33:50.310 "traddr": "10.0.0.2", 00:33:50.310 "adrfam": "ipv4", 00:33:50.310 "trsvcid": "4420", 00:33:50.310 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:50.310 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:50.310 "hdgst": false, 00:33:50.310 "ddgst": false 00:33:50.310 }, 00:33:50.310 "method": "bdev_nvme_attach_controller" 00:33:50.310 }' 00:33:50.310 [2024-07-14 04:50:10.476766] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:50.310 [2024-07-14 04:50:10.476838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2944995 ] 00:33:50.570 EAL: No free 2048 kB hugepages reported on node 1 00:33:50.570 [2024-07-14 04:50:10.540129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.570 [2024-07-14 04:50:10.637841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:50.830 Running I/O for 1 seconds... 
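The target-side bring-up traced above reduces to a short sequence: move one of the two PCI-backed interfaces into a network namespace to form a TCP loopback, start nvmf_tgt inside that namespace, then configure it over RPC. The sketch below is a condensed reading of the nvmf/common.sh and rpc_cmd traces in this log, not the scripts themselves; the interface names, namespace name, addresses, and RPC arguments are the ones that appear above, while paths are abbreviated to the build tree.

    # Loopback topology (from the nvmf_tcp_init trace): cvl_0_0 moves into the
    # namespace and becomes the target side, cvl_0_1 stays in the root
    # namespace as the initiator side.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    modprobe nvme-tcp

    # Target configuration; rpc_cmd forwards to scripts/rpc.py, and the method
    # names and arguments below are exactly the ones traced above.  The target
    # runs inside the namespace but its RPC socket is /var/tmp/spdk.sock, so
    # rpc.py can be driven from the root namespace once the socket appears.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420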
00:33:51.768 00:33:51.768 Latency(us) 00:33:51.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:51.768 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:51.768 Verification LBA range: start 0x0 length 0x4000 00:33:51.768 Nvme1n1 : 1.01 8882.57 34.70 0.00 0.00 14345.90 1074.06 16214.09 00:33:51.768 =================================================================================================================== 00:33:51.768 Total : 8882.57 34.70 0.00 0.00 14345.90 1074.06 16214.09 00:33:52.027 04:50:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2945143 00:33:52.027 04:50:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:52.027 04:50:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:52.027 04:50:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:52.027 04:50:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:52.027 04:50:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:52.027 04:50:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:52.027 04:50:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:52.027 { 00:33:52.027 "params": { 00:33:52.027 "name": "Nvme$subsystem", 00:33:52.027 "trtype": "$TEST_TRANSPORT", 00:33:52.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:52.027 "adrfam": "ipv4", 00:33:52.027 "trsvcid": "$NVMF_PORT", 00:33:52.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:52.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:52.027 "hdgst": ${hdgst:-false}, 00:33:52.027 "ddgst": ${ddgst:-false} 00:33:52.027 }, 00:33:52.027 "method": "bdev_nvme_attach_controller" 00:33:52.027 } 00:33:52.027 EOF 00:33:52.027 )") 00:33:52.027 04:50:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:52.027 04:50:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:52.027 04:50:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:52.027 04:50:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:52.027 "params": { 00:33:52.027 "name": "Nvme1", 00:33:52.027 "trtype": "tcp", 00:33:52.027 "traddr": "10.0.0.2", 00:33:52.027 "adrfam": "ipv4", 00:33:52.027 "trsvcid": "4420", 00:33:52.027 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:52.027 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:52.027 "hdgst": false, 00:33:52.027 "ddgst": false 00:33:52.027 }, 00:33:52.027 "method": "bdev_nvme_attach_controller" 00:33:52.027 }' 00:33:52.027 [2024-07-14 04:50:12.122231] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:52.027 [2024-07-14 04:50:12.122316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2945143 ] 00:33:52.027 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.027 [2024-07-14 04:50:12.181930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:52.284 [2024-07-14 04:50:12.266195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:52.284 Running I/O for 15 seconds... 
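On the initiator side, bdevperf never reads a config file: gen_nvmf_target_json prints the bdev_nvme_attach_controller fragment shown above, wraps it in an SPDK JSON config, and the harness hands it over through a file descriptor (--json /dev/fd/62 for the 1-second baseline, /dev/fd/63 for the long run). The second run is the failover leg: it is started with -t 15, and a few seconds in the target process is killed outright so the host-side reconnect path has to cope. A rough standalone equivalent, using only names and arguments taken from this log (the JSON wrapper structure is an assumption based on the printed fragment; nvmfpid is the target pid captured earlier):

    # JSON the initiator needs: one bdev_nvme controller attached over TCP to
    # the listener created above.
    cfg='{
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false }
        } ]
      } ]
    }'

    # Failover leg as traced in host/bdevperf.sh: long run, then hard-kill the
    # target mid-run so every in-flight command is aborted and the bdev_nvme
    # layer starts resetting the controller.
    ./build/examples/bdevperf --json <(echo "$cfg") -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!
    sleep 3
    kill -9 "$nvmfpid"
    sleep 3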
00:33:55.574 04:50:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2944858
00:33:55.574 04:50:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
[2024-07-14 04:50:15.093807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:57488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-14 04:50:15.093875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-14 04:50:15.093928 .. 04:50:15.097801] (condensed) the same command/completion pair repeats for every remaining queued READ on qid:1 -- successive 8-block reads from lba 57496 through lba 58384 -- each printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) once the target process is gone
[2024-07-14 04:50:15.097820 .. 04:50:15.098320] (condensed) the queued WRITEs on qid:1 (lba 58392 through lba 58496, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) are aborted with the same SQ DELETION status
[2024-07-14 04:50:15.098337] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c9150 is same with the state(5) to be set
[2024-07-14 04:50:15.098356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-07-14 04:50:15.098369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-07-14 04:50:15.098382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58504 len:8 PRP1 0x0 PRP2 0x0
[2024-07-14 04:50:15.098396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-14 04:50:15.098464] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22c9150 was disconnected and freed. reset controller.
00:33:55.577 [2024-07-14 04:50:15.098540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:55.577 [2024-07-14 04:50:15.098568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.577 [2024-07-14 04:50:15.098586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:55.577 [2024-07-14 04:50:15.098601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.577 [2024-07-14 04:50:15.098616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:55.577 [2024-07-14 04:50:15.098638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.577 [2024-07-14 04:50:15.098669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:55.577 [2024-07-14 04:50:15.098682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.577 [2024-07-14 04:50:15.098695] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.577 [2024-07-14 04:50:15.102369] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.577 [2024-07-14 04:50:15.102411] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.577 [2024-07-14 04:50:15.103285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.577 [2024-07-14 04:50:15.103319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.577 [2024-07-14 04:50:15.103338] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.577 [2024-07-14 04:50:15.103580] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.577 [2024-07-14 04:50:15.103826] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.577 [2024-07-14 04:50:15.103851] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.577 [2024-07-14 04:50:15.103880] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.577 [2024-07-14 04:50:15.107477] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
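errno = 111 on connect() is ECONNREFUSED: the kill -9 above removed the only listener on 10.0.0.2:4420, so each reconnect attempted by the reset path is refused immediately and bdev_nvme schedules another reset. A quick way to confirm the listener is gone while this loop runs (a sketch, run on the same host) is to look for the NVMe/TCP port inside the target namespace:

    # No output here means nothing is listening on 4420 any more.
    ip netns exec cvl_0_0_ns_spdk ss -ltn 'sport = :4420'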
00:33:55.577 [2024-07-14 04:50:15.116555] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.578 [2024-07-14 04:50:15.117008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.578 [2024-07-14 04:50:15.117041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.578 [2024-07-14 04:50:15.117060] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.578 [2024-07-14 04:50:15.117300] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.578 [2024-07-14 04:50:15.117545] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.578 [2024-07-14 04:50:15.117570] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.578 [2024-07-14 04:50:15.117586] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.578 [2024-07-14 04:50:15.121192] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.578 [2024-07-14 04:50:15.130504] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.578 [2024-07-14 04:50:15.130978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.578 [2024-07-14 04:50:15.131016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.578 [2024-07-14 04:50:15.131035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.578 [2024-07-14 04:50:15.131275] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.578 [2024-07-14 04:50:15.131519] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.578 [2024-07-14 04:50:15.131543] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.578 [2024-07-14 04:50:15.131559] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.578 [2024-07-14 04:50:15.135163] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.578 [2024-07-14 04:50:15.144507] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.578 [2024-07-14 04:50:15.145030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.578 [2024-07-14 04:50:15.145062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.578 [2024-07-14 04:50:15.145081] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.578 [2024-07-14 04:50:15.145320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.578 [2024-07-14 04:50:15.145564] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.578 [2024-07-14 04:50:15.145593] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.578 [2024-07-14 04:50:15.145609] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.578 [2024-07-14 04:50:15.149223] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.578 [2024-07-14 04:50:15.158546] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.578 [2024-07-14 04:50:15.159009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.578 [2024-07-14 04:50:15.159040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.578 [2024-07-14 04:50:15.159059] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.578 [2024-07-14 04:50:15.159298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.578 [2024-07-14 04:50:15.159542] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.578 [2024-07-14 04:50:15.159566] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.578 [2024-07-14 04:50:15.159582] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.578 [2024-07-14 04:50:15.163174] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.578 [2024-07-14 04:50:15.172485] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.578 [2024-07-14 04:50:15.172955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.578 [2024-07-14 04:50:15.172987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.578 [2024-07-14 04:50:15.173005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.578 [2024-07-14 04:50:15.173245] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.578 [2024-07-14 04:50:15.173495] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.578 [2024-07-14 04:50:15.173519] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.578 [2024-07-14 04:50:15.173535] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.578 [2024-07-14 04:50:15.177129] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.578 [2024-07-14 04:50:15.186441] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.578 [2024-07-14 04:50:15.186908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.578 [2024-07-14 04:50:15.186940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.578 [2024-07-14 04:50:15.186958] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.578 [2024-07-14 04:50:15.187198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.578 [2024-07-14 04:50:15.187441] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.578 [2024-07-14 04:50:15.187466] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.578 [2024-07-14 04:50:15.187482] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.578 [2024-07-14 04:50:15.191073] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.578 [2024-07-14 04:50:15.200381] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.578 [2024-07-14 04:50:15.200848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.578 [2024-07-14 04:50:15.200887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.578 [2024-07-14 04:50:15.200907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.578 [2024-07-14 04:50:15.201146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.578 [2024-07-14 04:50:15.201389] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.578 [2024-07-14 04:50:15.201413] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.578 [2024-07-14 04:50:15.201429] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.578 [2024-07-14 04:50:15.205015] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.578 [2024-07-14 04:50:15.214334] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.578 [2024-07-14 04:50:15.214780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.578 [2024-07-14 04:50:15.214812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.578 [2024-07-14 04:50:15.214830] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.578 [2024-07-14 04:50:15.215082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.578 [2024-07-14 04:50:15.215326] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.578 [2024-07-14 04:50:15.215350] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.578 [2024-07-14 04:50:15.215366] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.578 [2024-07-14 04:50:15.218976] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.578 [2024-07-14 04:50:15.228294] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.578 [2024-07-14 04:50:15.228768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.578 [2024-07-14 04:50:15.228810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.578 [2024-07-14 04:50:15.228827] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.578 [2024-07-14 04:50:15.229083] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.578 [2024-07-14 04:50:15.229328] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.578 [2024-07-14 04:50:15.229353] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.578 [2024-07-14 04:50:15.229368] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.579 [2024-07-14 04:50:15.232966] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.579 [2024-07-14 04:50:15.242281] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.579 [2024-07-14 04:50:15.242756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.579 [2024-07-14 04:50:15.242788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.579 [2024-07-14 04:50:15.242805] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.579 [2024-07-14 04:50:15.243054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.579 [2024-07-14 04:50:15.243299] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.579 [2024-07-14 04:50:15.243324] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.579 [2024-07-14 04:50:15.243340] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.579 [2024-07-14 04:50:15.246928] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.579 [2024-07-14 04:50:15.256233] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.579 [2024-07-14 04:50:15.256800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.579 [2024-07-14 04:50:15.256853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.579 [2024-07-14 04:50:15.256880] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.579 [2024-07-14 04:50:15.257121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.579 [2024-07-14 04:50:15.257364] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.579 [2024-07-14 04:50:15.257389] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.579 [2024-07-14 04:50:15.257405] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.579 [2024-07-14 04:50:15.261000] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.579 [2024-07-14 04:50:15.270105] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.579 [2024-07-14 04:50:15.270672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.579 [2024-07-14 04:50:15.270723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.579 [2024-07-14 04:50:15.270746] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.579 [2024-07-14 04:50:15.270997] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.579 [2024-07-14 04:50:15.271241] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.579 [2024-07-14 04:50:15.271266] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.579 [2024-07-14 04:50:15.271282] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.579 [2024-07-14 04:50:15.274874] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.579 [2024-07-14 04:50:15.283981] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.579 [2024-07-14 04:50:15.284446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.579 [2024-07-14 04:50:15.284476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.579 [2024-07-14 04:50:15.284494] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.579 [2024-07-14 04:50:15.284734] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.579 [2024-07-14 04:50:15.284991] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.579 [2024-07-14 04:50:15.285016] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.579 [2024-07-14 04:50:15.285032] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.579 [2024-07-14 04:50:15.288613] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.579 [2024-07-14 04:50:15.297929] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.579 [2024-07-14 04:50:15.298378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.579 [2024-07-14 04:50:15.298420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.579 [2024-07-14 04:50:15.298436] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.579 [2024-07-14 04:50:15.298686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.579 [2024-07-14 04:50:15.298944] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.579 [2024-07-14 04:50:15.298969] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.579 [2024-07-14 04:50:15.298985] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.579 [2024-07-14 04:50:15.302569] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.579 [2024-07-14 04:50:15.311901] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.579 [2024-07-14 04:50:15.312422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.579 [2024-07-14 04:50:15.312464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.579 [2024-07-14 04:50:15.312481] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.579 [2024-07-14 04:50:15.312743] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.579 [2024-07-14 04:50:15.312999] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.579 [2024-07-14 04:50:15.313031] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.579 [2024-07-14 04:50:15.313048] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.579 [2024-07-14 04:50:15.316630] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.579 [2024-07-14 04:50:15.325979] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.579 [2024-07-14 04:50:15.326463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.579 [2024-07-14 04:50:15.326494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.579 [2024-07-14 04:50:15.326512] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.579 [2024-07-14 04:50:15.326751] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.579 [2024-07-14 04:50:15.327007] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.579 [2024-07-14 04:50:15.327031] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.579 [2024-07-14 04:50:15.327047] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.579 [2024-07-14 04:50:15.330633] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.579 [2024-07-14 04:50:15.339952] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.579 [2024-07-14 04:50:15.340415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.579 [2024-07-14 04:50:15.340447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.579 [2024-07-14 04:50:15.340465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.579 [2024-07-14 04:50:15.340704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.579 [2024-07-14 04:50:15.340960] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.579 [2024-07-14 04:50:15.340985] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.579 [2024-07-14 04:50:15.341001] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.579 [2024-07-14 04:50:15.344582] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.579 [2024-07-14 04:50:15.353913] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.579 [2024-07-14 04:50:15.354357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.579 [2024-07-14 04:50:15.354388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.579 [2024-07-14 04:50:15.354406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.579 [2024-07-14 04:50:15.354645] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.579 [2024-07-14 04:50:15.354899] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.579 [2024-07-14 04:50:15.354924] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.579 [2024-07-14 04:50:15.354940] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.579 [2024-07-14 04:50:15.358521] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.579 [2024-07-14 04:50:15.367828] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.579 [2024-07-14 04:50:15.368285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.579 [2024-07-14 04:50:15.368316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.579 [2024-07-14 04:50:15.368334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.579 [2024-07-14 04:50:15.368573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.579 [2024-07-14 04:50:15.368817] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.579 [2024-07-14 04:50:15.368841] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.579 [2024-07-14 04:50:15.368856] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.579 [2024-07-14 04:50:15.372446] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.579 [2024-07-14 04:50:15.381751] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.579 [2024-07-14 04:50:15.382202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.579 [2024-07-14 04:50:15.382233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.579 [2024-07-14 04:50:15.382251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.579 [2024-07-14 04:50:15.382490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.579 [2024-07-14 04:50:15.382734] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.580 [2024-07-14 04:50:15.382758] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.580 [2024-07-14 04:50:15.382774] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.580 [2024-07-14 04:50:15.386358] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.580 [2024-07-14 04:50:15.395693] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.580 [2024-07-14 04:50:15.396146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.580 [2024-07-14 04:50:15.396177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.580 [2024-07-14 04:50:15.396195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.580 [2024-07-14 04:50:15.396434] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.580 [2024-07-14 04:50:15.396677] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.580 [2024-07-14 04:50:15.396701] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.580 [2024-07-14 04:50:15.396717] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.580 [2024-07-14 04:50:15.400309] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.580 [2024-07-14 04:50:15.409608] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.580 [2024-07-14 04:50:15.410069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.580 [2024-07-14 04:50:15.410101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.580 [2024-07-14 04:50:15.410119] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.580 [2024-07-14 04:50:15.410363] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.580 [2024-07-14 04:50:15.410607] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.580 [2024-07-14 04:50:15.410631] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.580 [2024-07-14 04:50:15.410646] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.580 [2024-07-14 04:50:15.414237] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.580 [2024-07-14 04:50:15.423545] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.580 [2024-07-14 04:50:15.424027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.580 [2024-07-14 04:50:15.424058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.580 [2024-07-14 04:50:15.424076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.580 [2024-07-14 04:50:15.424316] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.580 [2024-07-14 04:50:15.424560] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.580 [2024-07-14 04:50:15.424584] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.580 [2024-07-14 04:50:15.424600] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.580 [2024-07-14 04:50:15.428191] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.580 [2024-07-14 04:50:15.437500] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.580 [2024-07-14 04:50:15.437941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.580 [2024-07-14 04:50:15.437973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.580 [2024-07-14 04:50:15.437991] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.580 [2024-07-14 04:50:15.438230] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.580 [2024-07-14 04:50:15.438474] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.580 [2024-07-14 04:50:15.438498] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.580 [2024-07-14 04:50:15.438514] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.580 [2024-07-14 04:50:15.442108] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.580 [2024-07-14 04:50:15.451414] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.580 [2024-07-14 04:50:15.451915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.580 [2024-07-14 04:50:15.451944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.580 [2024-07-14 04:50:15.451961] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.580 [2024-07-14 04:50:15.452199] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.580 [2024-07-14 04:50:15.452458] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.580 [2024-07-14 04:50:15.452482] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.580 [2024-07-14 04:50:15.452503] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.580 [2024-07-14 04:50:15.456096] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.580 [2024-07-14 04:50:15.465399] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.580 [2024-07-14 04:50:15.465872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.580 [2024-07-14 04:50:15.465904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.580 [2024-07-14 04:50:15.465922] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.580 [2024-07-14 04:50:15.466161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.580 [2024-07-14 04:50:15.466404] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.580 [2024-07-14 04:50:15.466428] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.580 [2024-07-14 04:50:15.466444] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.580 [2024-07-14 04:50:15.470035] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.580 [2024-07-14 04:50:15.479339] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.580 [2024-07-14 04:50:15.479819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.580 [2024-07-14 04:50:15.479861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.580 [2024-07-14 04:50:15.479889] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.580 [2024-07-14 04:50:15.480142] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.580 [2024-07-14 04:50:15.480386] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.580 [2024-07-14 04:50:15.480410] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.580 [2024-07-14 04:50:15.480426] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.580 [2024-07-14 04:50:15.484016] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.580 [2024-07-14 04:50:15.493317] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.580 [2024-07-14 04:50:15.493798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.580 [2024-07-14 04:50:15.493824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.580 [2024-07-14 04:50:15.493854] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.580 [2024-07-14 04:50:15.494130] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.580 [2024-07-14 04:50:15.494374] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.580 [2024-07-14 04:50:15.494398] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.580 [2024-07-14 04:50:15.494414] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.580 [2024-07-14 04:50:15.498006] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.580 [2024-07-14 04:50:15.507312] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.580 [2024-07-14 04:50:15.507768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.580 [2024-07-14 04:50:15.507799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.580 [2024-07-14 04:50:15.507817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.580 [2024-07-14 04:50:15.508067] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.580 [2024-07-14 04:50:15.508312] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.580 [2024-07-14 04:50:15.508336] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.580 [2024-07-14 04:50:15.508352] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.580 [2024-07-14 04:50:15.511946] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.580 [2024-07-14 04:50:15.521280] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.580 [2024-07-14 04:50:15.521760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.580 [2024-07-14 04:50:15.521801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.580 [2024-07-14 04:50:15.521818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.580 [2024-07-14 04:50:15.522081] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.580 [2024-07-14 04:50:15.522325] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.580 [2024-07-14 04:50:15.522349] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.580 [2024-07-14 04:50:15.522365] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.580 [2024-07-14 04:50:15.525955] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.581 [2024-07-14 04:50:15.535272] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.581 [2024-07-14 04:50:15.535744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.581 [2024-07-14 04:50:15.535776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.581 [2024-07-14 04:50:15.535793] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.581 [2024-07-14 04:50:15.536043] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.581 [2024-07-14 04:50:15.536289] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.581 [2024-07-14 04:50:15.536313] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.581 [2024-07-14 04:50:15.536328] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.581 [2024-07-14 04:50:15.539922] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.581 [2024-07-14 04:50:15.549229] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.581 [2024-07-14 04:50:15.549698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.581 [2024-07-14 04:50:15.549730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.581 [2024-07-14 04:50:15.549747] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.581 [2024-07-14 04:50:15.550004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.581 [2024-07-14 04:50:15.550249] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.581 [2024-07-14 04:50:15.550274] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.581 [2024-07-14 04:50:15.550290] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.581 [2024-07-14 04:50:15.553880] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.581 [2024-07-14 04:50:15.563191] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.581 [2024-07-14 04:50:15.563819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.581 [2024-07-14 04:50:15.563886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.581 [2024-07-14 04:50:15.563906] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.581 [2024-07-14 04:50:15.564146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.581 [2024-07-14 04:50:15.564389] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.581 [2024-07-14 04:50:15.564413] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.581 [2024-07-14 04:50:15.564430] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.581 [2024-07-14 04:50:15.568026] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.581 [2024-07-14 04:50:15.577125] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.581 [2024-07-14 04:50:15.577648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.581 [2024-07-14 04:50:15.577691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.581 [2024-07-14 04:50:15.577707] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.581 [2024-07-14 04:50:15.577982] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.581 [2024-07-14 04:50:15.578226] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.581 [2024-07-14 04:50:15.578250] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.581 [2024-07-14 04:50:15.578266] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.581 [2024-07-14 04:50:15.581853] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.581 [2024-07-14 04:50:15.591165] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.581 [2024-07-14 04:50:15.591679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.581 [2024-07-14 04:50:15.591710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.581 [2024-07-14 04:50:15.591728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.581 [2024-07-14 04:50:15.591978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.581 [2024-07-14 04:50:15.592223] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.581 [2024-07-14 04:50:15.592247] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.581 [2024-07-14 04:50:15.592271] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.581 [2024-07-14 04:50:15.595852] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.581 [2024-07-14 04:50:15.605175] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.581 [2024-07-14 04:50:15.605623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.581 [2024-07-14 04:50:15.605654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.581 [2024-07-14 04:50:15.605672] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.581 [2024-07-14 04:50:15.605921] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.581 [2024-07-14 04:50:15.606166] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.581 [2024-07-14 04:50:15.606190] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.581 [2024-07-14 04:50:15.606205] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.581 [2024-07-14 04:50:15.609787] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.581 [2024-07-14 04:50:15.619103] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.581 [2024-07-14 04:50:15.619576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.581 [2024-07-14 04:50:15.619608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.581 [2024-07-14 04:50:15.619626] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.581 [2024-07-14 04:50:15.619875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.581 [2024-07-14 04:50:15.620120] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.581 [2024-07-14 04:50:15.620144] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.581 [2024-07-14 04:50:15.620160] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.581 [2024-07-14 04:50:15.623741] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.581 [2024-07-14 04:50:15.633057] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.581 [2024-07-14 04:50:15.633536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.581 [2024-07-14 04:50:15.633577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.581 [2024-07-14 04:50:15.633594] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.581 [2024-07-14 04:50:15.633843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.581 [2024-07-14 04:50:15.634097] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.581 [2024-07-14 04:50:15.634121] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.581 [2024-07-14 04:50:15.634138] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.581 [2024-07-14 04:50:15.637720] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.581 [2024-07-14 04:50:15.647033] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.581 [2024-07-14 04:50:15.647478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.581 [2024-07-14 04:50:15.647523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.581 [2024-07-14 04:50:15.647539] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.581 [2024-07-14 04:50:15.647805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.581 [2024-07-14 04:50:15.648061] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.581 [2024-07-14 04:50:15.648085] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.581 [2024-07-14 04:50:15.648101] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.581 [2024-07-14 04:50:15.651683] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.581 [2024-07-14 04:50:15.660995] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.581 [2024-07-14 04:50:15.661432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.581 [2024-07-14 04:50:15.661464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.581 [2024-07-14 04:50:15.661481] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.581 [2024-07-14 04:50:15.661720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.581 [2024-07-14 04:50:15.661976] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.581 [2024-07-14 04:50:15.662001] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.581 [2024-07-14 04:50:15.662017] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.581 [2024-07-14 04:50:15.665600] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.581 [2024-07-14 04:50:15.674906] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.581 [2024-07-14 04:50:15.675370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.581 [2024-07-14 04:50:15.675401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.581 [2024-07-14 04:50:15.675419] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.581 [2024-07-14 04:50:15.675658] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.581 [2024-07-14 04:50:15.675913] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.581 [2024-07-14 04:50:15.675939] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.581 [2024-07-14 04:50:15.675955] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.582 [2024-07-14 04:50:15.679535] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.582 [2024-07-14 04:50:15.688858] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.582 [2024-07-14 04:50:15.689328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.582 [2024-07-14 04:50:15.689359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.582 [2024-07-14 04:50:15.689379] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.582 [2024-07-14 04:50:15.689618] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.582 [2024-07-14 04:50:15.689878] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.582 [2024-07-14 04:50:15.689912] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.582 [2024-07-14 04:50:15.689928] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.582 [2024-07-14 04:50:15.693518] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.582 [2024-07-14 04:50:15.702847] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.582 [2024-07-14 04:50:15.703344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.582 [2024-07-14 04:50:15.703385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.582 [2024-07-14 04:50:15.703402] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.582 [2024-07-14 04:50:15.703660] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.582 [2024-07-14 04:50:15.703914] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.582 [2024-07-14 04:50:15.703939] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.582 [2024-07-14 04:50:15.703955] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.582 [2024-07-14 04:50:15.707539] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.582 [2024-07-14 04:50:15.716729] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.582 [2024-07-14 04:50:15.717203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.582 [2024-07-14 04:50:15.717235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.582 [2024-07-14 04:50:15.717254] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.582 [2024-07-14 04:50:15.717493] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.582 [2024-07-14 04:50:15.717737] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.582 [2024-07-14 04:50:15.717761] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.582 [2024-07-14 04:50:15.717777] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.582 [2024-07-14 04:50:15.721371] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.582 [2024-07-14 04:50:15.730694] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.582 [2024-07-14 04:50:15.731145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.582 [2024-07-14 04:50:15.731188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.582 [2024-07-14 04:50:15.731204] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.582 [2024-07-14 04:50:15.731446] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.582 [2024-07-14 04:50:15.731690] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.582 [2024-07-14 04:50:15.731714] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.582 [2024-07-14 04:50:15.731730] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.582 [2024-07-14 04:50:15.735336] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.582 [2024-07-14 04:50:15.744638] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.582 [2024-07-14 04:50:15.745118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.582 [2024-07-14 04:50:15.745150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.582 [2024-07-14 04:50:15.745168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.582 [2024-07-14 04:50:15.745407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.582 [2024-07-14 04:50:15.745650] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.582 [2024-07-14 04:50:15.745675] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.582 [2024-07-14 04:50:15.745690] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.582 [2024-07-14 04:50:15.749283] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.582 [2024-07-14 04:50:15.758803] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.582 [2024-07-14 04:50:15.759268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.842 [2024-07-14 04:50:15.759301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.842 [2024-07-14 04:50:15.759320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.843 [2024-07-14 04:50:15.759562] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.843 [2024-07-14 04:50:15.759806] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.843 [2024-07-14 04:50:15.759831] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.843 [2024-07-14 04:50:15.759847] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.843 [2024-07-14 04:50:15.763546] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.843 [2024-07-14 04:50:15.772856] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.843 [2024-07-14 04:50:15.773334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.843 [2024-07-14 04:50:15.773366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.843 [2024-07-14 04:50:15.773384] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.843 [2024-07-14 04:50:15.773623] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.843 [2024-07-14 04:50:15.773877] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.843 [2024-07-14 04:50:15.773902] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.843 [2024-07-14 04:50:15.773918] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.843 [2024-07-14 04:50:15.777613] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.843 [2024-07-14 04:50:15.786717] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.843 [2024-07-14 04:50:15.787167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.843 [2024-07-14 04:50:15.787199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.843 [2024-07-14 04:50:15.787225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.843 [2024-07-14 04:50:15.787466] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.843 [2024-07-14 04:50:15.787709] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.843 [2024-07-14 04:50:15.787734] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.843 [2024-07-14 04:50:15.787749] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.843 [2024-07-14 04:50:15.791343] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.843 [2024-07-14 04:50:15.800649] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.843 [2024-07-14 04:50:15.801130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.843 [2024-07-14 04:50:15.801171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.843 [2024-07-14 04:50:15.801187] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.843 [2024-07-14 04:50:15.801438] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.843 [2024-07-14 04:50:15.801683] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.843 [2024-07-14 04:50:15.801707] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.843 [2024-07-14 04:50:15.801723] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.843 [2024-07-14 04:50:15.805315] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.843 [2024-07-14 04:50:15.814624] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.843 [2024-07-14 04:50:15.815092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.843 [2024-07-14 04:50:15.815124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.843 [2024-07-14 04:50:15.815142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.843 [2024-07-14 04:50:15.815380] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.843 [2024-07-14 04:50:15.815624] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.843 [2024-07-14 04:50:15.815648] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.843 [2024-07-14 04:50:15.815664] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.843 [2024-07-14 04:50:15.819257] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.843 [2024-07-14 04:50:15.828563] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.843 [2024-07-14 04:50:15.829014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.843 [2024-07-14 04:50:15.829046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.843 [2024-07-14 04:50:15.829064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.843 [2024-07-14 04:50:15.829303] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.843 [2024-07-14 04:50:15.829547] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.843 [2024-07-14 04:50:15.829576] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.843 [2024-07-14 04:50:15.829592] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.843 [2024-07-14 04:50:15.833190] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.843 [2024-07-14 04:50:15.842502] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.843 [2024-07-14 04:50:15.843011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.843 [2024-07-14 04:50:15.843061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.843 [2024-07-14 04:50:15.843079] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.843 [2024-07-14 04:50:15.843318] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.843 [2024-07-14 04:50:15.843562] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.843 [2024-07-14 04:50:15.843586] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.843 [2024-07-14 04:50:15.843602] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.843 [2024-07-14 04:50:15.847193] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.843 [2024-07-14 04:50:15.856512] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.843 [2024-07-14 04:50:15.856989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.843 [2024-07-14 04:50:15.857020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.843 [2024-07-14 04:50:15.857038] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.843 [2024-07-14 04:50:15.857277] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.843 [2024-07-14 04:50:15.857522] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.843 [2024-07-14 04:50:15.857546] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.843 [2024-07-14 04:50:15.857562] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.843 [2024-07-14 04:50:15.861148] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.843 [2024-07-14 04:50:15.870464] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.843 [2024-07-14 04:50:15.870993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.843 [2024-07-14 04:50:15.871025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.843 [2024-07-14 04:50:15.871043] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.843 [2024-07-14 04:50:15.871282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.843 [2024-07-14 04:50:15.871526] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.843 [2024-07-14 04:50:15.871551] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.843 [2024-07-14 04:50:15.871567] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.843 [2024-07-14 04:50:15.875160] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.843 [2024-07-14 04:50:15.884478] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.843 [2024-07-14 04:50:15.884998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.843 [2024-07-14 04:50:15.885029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.843 [2024-07-14 04:50:15.885047] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.843 [2024-07-14 04:50:15.885285] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.843 [2024-07-14 04:50:15.885529] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.843 [2024-07-14 04:50:15.885553] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.843 [2024-07-14 04:50:15.885569] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.843 [2024-07-14 04:50:15.889162] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.843 [2024-07-14 04:50:15.898466] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.843 [2024-07-14 04:50:15.899002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.843 [2024-07-14 04:50:15.899034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.843 [2024-07-14 04:50:15.899052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.843 [2024-07-14 04:50:15.899291] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.843 [2024-07-14 04:50:15.899534] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.843 [2024-07-14 04:50:15.899559] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.843 [2024-07-14 04:50:15.899574] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.843 [2024-07-14 04:50:15.903182] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.843 [2024-07-14 04:50:15.912487] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.843 [2024-07-14 04:50:15.912948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.844 [2024-07-14 04:50:15.912975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.844 [2024-07-14 04:50:15.912990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.844 [2024-07-14 04:50:15.913256] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.844 [2024-07-14 04:50:15.913500] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.844 [2024-07-14 04:50:15.913525] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.844 [2024-07-14 04:50:15.913540] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.844 [2024-07-14 04:50:15.917135] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.844 [2024-07-14 04:50:15.926447] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.844 [2024-07-14 04:50:15.926943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.844 [2024-07-14 04:50:15.926972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.844 [2024-07-14 04:50:15.926988] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.844 [2024-07-14 04:50:15.927252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.844 [2024-07-14 04:50:15.927496] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.844 [2024-07-14 04:50:15.927521] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.844 [2024-07-14 04:50:15.927536] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.844 [2024-07-14 04:50:15.931131] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.844 [2024-07-14 04:50:15.940446] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.844 [2024-07-14 04:50:15.940907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.844 [2024-07-14 04:50:15.940938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.844 [2024-07-14 04:50:15.940956] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.844 [2024-07-14 04:50:15.941195] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.844 [2024-07-14 04:50:15.941438] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.844 [2024-07-14 04:50:15.941462] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.844 [2024-07-14 04:50:15.941479] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.844 [2024-07-14 04:50:15.945074] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.844 [2024-07-14 04:50:15.954386] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.844 [2024-07-14 04:50:15.954902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.844 [2024-07-14 04:50:15.954934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.844 [2024-07-14 04:50:15.954952] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.844 [2024-07-14 04:50:15.955191] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.844 [2024-07-14 04:50:15.955434] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.844 [2024-07-14 04:50:15.955458] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.844 [2024-07-14 04:50:15.955473] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.844 [2024-07-14 04:50:15.959064] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.844 [2024-07-14 04:50:15.968383] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.844 [2024-07-14 04:50:15.968829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.844 [2024-07-14 04:50:15.968860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.844 [2024-07-14 04:50:15.968890] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.844 [2024-07-14 04:50:15.969130] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.844 [2024-07-14 04:50:15.969374] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.844 [2024-07-14 04:50:15.969398] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.844 [2024-07-14 04:50:15.969419] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.844 [2024-07-14 04:50:15.973014] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.844 [2024-07-14 04:50:15.982332] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.844 [2024-07-14 04:50:15.982791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.844 [2024-07-14 04:50:15.982823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.844 [2024-07-14 04:50:15.982842] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.844 [2024-07-14 04:50:15.983090] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.844 [2024-07-14 04:50:15.983335] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.844 [2024-07-14 04:50:15.983359] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.844 [2024-07-14 04:50:15.983375] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.844 [2024-07-14 04:50:15.986983] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.844 [2024-07-14 04:50:15.996310] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.844 [2024-07-14 04:50:15.996929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.844 [2024-07-14 04:50:15.996961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.844 [2024-07-14 04:50:15.996979] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.844 [2024-07-14 04:50:15.997218] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.844 [2024-07-14 04:50:15.997463] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.844 [2024-07-14 04:50:15.997487] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.844 [2024-07-14 04:50:15.997503] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.844 [2024-07-14 04:50:16.001101] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.844 [2024-07-14 04:50:16.010251] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.844 [2024-07-14 04:50:16.010723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.844 [2024-07-14 04:50:16.010754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.844 [2024-07-14 04:50:16.010772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.844 [2024-07-14 04:50:16.011021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.844 [2024-07-14 04:50:16.011265] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.844 [2024-07-14 04:50:16.011290] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.844 [2024-07-14 04:50:16.011306] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.844 [2024-07-14 04:50:16.014905] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.844 [2024-07-14 04:50:16.024236] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.844 [2024-07-14 04:50:16.024751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.844 [2024-07-14 04:50:16.024791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:55.844 [2024-07-14 04:50:16.024807] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:55.844 [2024-07-14 04:50:16.025065] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:55.844 [2024-07-14 04:50:16.025309] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.844 [2024-07-14 04:50:16.025334] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.844 [2024-07-14 04:50:16.025350] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.844 [2024-07-14 04:50:16.029008] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.107 [2024-07-14 04:50:16.038287] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.107 [2024-07-14 04:50:16.038937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.107 [2024-07-14 04:50:16.038970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.107 [2024-07-14 04:50:16.038990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.107 [2024-07-14 04:50:16.039249] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.107 [2024-07-14 04:50:16.039494] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.107 [2024-07-14 04:50:16.039518] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.107 [2024-07-14 04:50:16.039534] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.107 [2024-07-14 04:50:16.043126] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.107 [2024-07-14 04:50:16.052240] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.107 [2024-07-14 04:50:16.052902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.107 [2024-07-14 04:50:16.052966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.107 [2024-07-14 04:50:16.052984] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.107 [2024-07-14 04:50:16.053223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.107 [2024-07-14 04:50:16.053467] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.107 [2024-07-14 04:50:16.053491] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.107 [2024-07-14 04:50:16.053507] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.107 [2024-07-14 04:50:16.057119] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.107 [2024-07-14 04:50:16.066223] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.107 [2024-07-14 04:50:16.066708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.107 [2024-07-14 04:50:16.066750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.107 [2024-07-14 04:50:16.066767] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.107 [2024-07-14 04:50:16.067041] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.107 [2024-07-14 04:50:16.067286] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.107 [2024-07-14 04:50:16.067311] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.107 [2024-07-14 04:50:16.067327] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.107 [2024-07-14 04:50:16.070920] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.107 [2024-07-14 04:50:16.080238] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.107 [2024-07-14 04:50:16.080766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.107 [2024-07-14 04:50:16.080808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.107 [2024-07-14 04:50:16.080825] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.107 [2024-07-14 04:50:16.081084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.107 [2024-07-14 04:50:16.081328] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.107 [2024-07-14 04:50:16.081352] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.107 [2024-07-14 04:50:16.081374] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.107 [2024-07-14 04:50:16.084962] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.107 [2024-07-14 04:50:16.094287] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.107 [2024-07-14 04:50:16.094753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.107 [2024-07-14 04:50:16.094785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.107 [2024-07-14 04:50:16.094803] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.107 [2024-07-14 04:50:16.095051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.107 [2024-07-14 04:50:16.095296] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.107 [2024-07-14 04:50:16.095320] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.107 [2024-07-14 04:50:16.095335] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.107 [2024-07-14 04:50:16.098926] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.107 [2024-07-14 04:50:16.108249] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.107 [2024-07-14 04:50:16.108721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.107 [2024-07-14 04:50:16.108752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.107 [2024-07-14 04:50:16.108770] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.107 [2024-07-14 04:50:16.109020] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.107 [2024-07-14 04:50:16.109265] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.107 [2024-07-14 04:50:16.109297] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.107 [2024-07-14 04:50:16.109318] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.107 [2024-07-14 04:50:16.112919] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.107 [2024-07-14 04:50:16.122250] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.107 [2024-07-14 04:50:16.122778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.107 [2024-07-14 04:50:16.122806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.107 [2024-07-14 04:50:16.122822] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.107 [2024-07-14 04:50:16.123083] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.107 [2024-07-14 04:50:16.123329] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.107 [2024-07-14 04:50:16.123353] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.107 [2024-07-14 04:50:16.123368] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.107 [2024-07-14 04:50:16.127080] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.107 [2024-07-14 04:50:16.136146] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.107 [2024-07-14 04:50:16.136595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.107 [2024-07-14 04:50:16.136625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.107 [2024-07-14 04:50:16.136642] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.107 [2024-07-14 04:50:16.136882] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.107 [2024-07-14 04:50:16.137118] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.107 [2024-07-14 04:50:16.137151] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.107 [2024-07-14 04:50:16.137166] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.107 [2024-07-14 04:50:16.140557] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.107 [2024-07-14 04:50:16.149594] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.107 [2024-07-14 04:50:16.150020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.107 [2024-07-14 04:50:16.150048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.107 [2024-07-14 04:50:16.150064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.107 [2024-07-14 04:50:16.150304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.107 [2024-07-14 04:50:16.150510] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.107 [2024-07-14 04:50:16.150531] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.107 [2024-07-14 04:50:16.150544] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.107 [2024-07-14 04:50:16.153562] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.107 [2024-07-14 04:50:16.162855] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.108 [2024-07-14 04:50:16.163317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.108 [2024-07-14 04:50:16.163349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.108 [2024-07-14 04:50:16.163380] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.108 [2024-07-14 04:50:16.163631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.108 [2024-07-14 04:50:16.163832] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.108 [2024-07-14 04:50:16.163851] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.108 [2024-07-14 04:50:16.163873] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.108 [2024-07-14 04:50:16.166778] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.108 [2024-07-14 04:50:16.175956] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.108 [2024-07-14 04:50:16.176366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.108 [2024-07-14 04:50:16.176395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.108 [2024-07-14 04:50:16.176411] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.108 [2024-07-14 04:50:16.176650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.108 [2024-07-14 04:50:16.176874] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.108 [2024-07-14 04:50:16.176895] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.108 [2024-07-14 04:50:16.176908] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.108 [2024-07-14 04:50:16.179834] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.108 [2024-07-14 04:50:16.189140] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.108 [2024-07-14 04:50:16.189624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.108 [2024-07-14 04:50:16.189666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.108 [2024-07-14 04:50:16.189683] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.108 [2024-07-14 04:50:16.189932] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.108 [2024-07-14 04:50:16.190138] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.108 [2024-07-14 04:50:16.190159] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.108 [2024-07-14 04:50:16.190172] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.108 [2024-07-14 04:50:16.193058] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.108 [2024-07-14 04:50:16.202358] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.108 [2024-07-14 04:50:16.202812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.108 [2024-07-14 04:50:16.202854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.108 [2024-07-14 04:50:16.202880] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.108 [2024-07-14 04:50:16.203113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.108 [2024-07-14 04:50:16.203351] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.108 [2024-07-14 04:50:16.203371] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.108 [2024-07-14 04:50:16.203384] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.108 [2024-07-14 04:50:16.206253] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.108 [2024-07-14 04:50:16.215552] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.108 [2024-07-14 04:50:16.215976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.108 [2024-07-14 04:50:16.216004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.108 [2024-07-14 04:50:16.216035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.108 [2024-07-14 04:50:16.216290] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.108 [2024-07-14 04:50:16.216490] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.108 [2024-07-14 04:50:16.216509] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.108 [2024-07-14 04:50:16.216522] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.108 [2024-07-14 04:50:16.219434] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.108 [2024-07-14 04:50:16.228629] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.108 [2024-07-14 04:50:16.229081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.108 [2024-07-14 04:50:16.229109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.108 [2024-07-14 04:50:16.229126] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.108 [2024-07-14 04:50:16.229379] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.108 [2024-07-14 04:50:16.229580] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.108 [2024-07-14 04:50:16.229599] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.108 [2024-07-14 04:50:16.229612] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.108 [2024-07-14 04:50:16.232524] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.108 [2024-07-14 04:50:16.241760] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.108 [2024-07-14 04:50:16.242213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.108 [2024-07-14 04:50:16.242241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.108 [2024-07-14 04:50:16.242258] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.108 [2024-07-14 04:50:16.242512] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.108 [2024-07-14 04:50:16.242713] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.108 [2024-07-14 04:50:16.242733] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.108 [2024-07-14 04:50:16.242746] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.108 [2024-07-14 04:50:16.245621] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.108 [2024-07-14 04:50:16.255012] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.108 [2024-07-14 04:50:16.255423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.108 [2024-07-14 04:50:16.255450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.108 [2024-07-14 04:50:16.255466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.108 [2024-07-14 04:50:16.255686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.108 [2024-07-14 04:50:16.255913] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.108 [2024-07-14 04:50:16.255934] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.108 [2024-07-14 04:50:16.255948] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.108 [2024-07-14 04:50:16.258925] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.108 [2024-07-14 04:50:16.268252] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.108 [2024-07-14 04:50:16.268710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.108 [2024-07-14 04:50:16.268737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.108 [2024-07-14 04:50:16.268768] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.108 [2024-07-14 04:50:16.269027] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.108 [2024-07-14 04:50:16.269227] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.108 [2024-07-14 04:50:16.269247] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.108 [2024-07-14 04:50:16.269260] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.108 [2024-07-14 04:50:16.272172] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.108 [2024-07-14 04:50:16.281401] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.108 [2024-07-14 04:50:16.281917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.108 [2024-07-14 04:50:16.281945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.108 [2024-07-14 04:50:16.281961] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.108 [2024-07-14 04:50:16.282215] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.108 [2024-07-14 04:50:16.282415] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.108 [2024-07-14 04:50:16.282434] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.108 [2024-07-14 04:50:16.282448] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.108 [2024-07-14 04:50:16.285383] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.108 [2024-07-14 04:50:16.294834] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.108 [2024-07-14 04:50:16.295351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.108 [2024-07-14 04:50:16.295380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.108 [2024-07-14 04:50:16.295406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.108 [2024-07-14 04:50:16.295660] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.108 [2024-07-14 04:50:16.295886] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.108 [2024-07-14 04:50:16.295907] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.108 [2024-07-14 04:50:16.295927] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.371 [2024-07-14 04:50:16.298976] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.371 [2024-07-14 04:50:16.307957] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.371 [2024-07-14 04:50:16.308431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.371 [2024-07-14 04:50:16.308459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.371 [2024-07-14 04:50:16.308476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.371 [2024-07-14 04:50:16.308730] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.371 [2024-07-14 04:50:16.308939] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.371 [2024-07-14 04:50:16.308960] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.371 [2024-07-14 04:50:16.308974] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.371 [2024-07-14 04:50:16.311871] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.371 [2024-07-14 04:50:16.321074] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.371 [2024-07-14 04:50:16.321577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.371 [2024-07-14 04:50:16.321606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.371 [2024-07-14 04:50:16.321622] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.371 [2024-07-14 04:50:16.321885] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.371 [2024-07-14 04:50:16.322092] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.371 [2024-07-14 04:50:16.322112] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.371 [2024-07-14 04:50:16.322125] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.371 [2024-07-14 04:50:16.325033] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.371 [2024-07-14 04:50:16.334211] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.371 [2024-07-14 04:50:16.334720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.371 [2024-07-14 04:50:16.334749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.371 [2024-07-14 04:50:16.334765] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.371 [2024-07-14 04:50:16.335029] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.371 [2024-07-14 04:50:16.335229] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.371 [2024-07-14 04:50:16.335254] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.371 [2024-07-14 04:50:16.335267] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.371 [2024-07-14 04:50:16.338177] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.371 [2024-07-14 04:50:16.347375] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.371 [2024-07-14 04:50:16.347847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.371 [2024-07-14 04:50:16.347881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.371 [2024-07-14 04:50:16.347899] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.371 [2024-07-14 04:50:16.348141] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.371 [2024-07-14 04:50:16.348356] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.371 [2024-07-14 04:50:16.348377] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.371 [2024-07-14 04:50:16.348390] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.371 [2024-07-14 04:50:16.351257] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.371 [2024-07-14 04:50:16.360762] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.371 [2024-07-14 04:50:16.361231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.371 [2024-07-14 04:50:16.361274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.371 [2024-07-14 04:50:16.361290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.371 [2024-07-14 04:50:16.361545] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.371 [2024-07-14 04:50:16.361745] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.371 [2024-07-14 04:50:16.361765] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.371 [2024-07-14 04:50:16.361778] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.371 [2024-07-14 04:50:16.364836] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.371 [2024-07-14 04:50:16.374032] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.371 [2024-07-14 04:50:16.374460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.371 [2024-07-14 04:50:16.374487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.371 [2024-07-14 04:50:16.374517] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.371 [2024-07-14 04:50:16.374751] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.371 [2024-07-14 04:50:16.375000] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.371 [2024-07-14 04:50:16.375022] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.371 [2024-07-14 04:50:16.375035] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.371 [2024-07-14 04:50:16.377954] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.371 [2024-07-14 04:50:16.387279] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.371 [2024-07-14 04:50:16.387738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.371 [2024-07-14 04:50:16.387782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.371 [2024-07-14 04:50:16.387799] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.371 [2024-07-14 04:50:16.388070] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.371 [2024-07-14 04:50:16.388309] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.371 [2024-07-14 04:50:16.388329] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.371 [2024-07-14 04:50:16.388341] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.371 [2024-07-14 04:50:16.391212] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.371 [2024-07-14 04:50:16.400433] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.371 [2024-07-14 04:50:16.400924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.371 [2024-07-14 04:50:16.400967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.371 [2024-07-14 04:50:16.400984] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.371 [2024-07-14 04:50:16.401239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.371 [2024-07-14 04:50:16.401440] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.371 [2024-07-14 04:50:16.401459] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.371 [2024-07-14 04:50:16.401472] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.371 [2024-07-14 04:50:16.404384] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.371 [2024-07-14 04:50:16.413560] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.371 [2024-07-14 04:50:16.414050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.372 [2024-07-14 04:50:16.414079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.372 [2024-07-14 04:50:16.414095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.372 [2024-07-14 04:50:16.414350] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.372 [2024-07-14 04:50:16.414551] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.372 [2024-07-14 04:50:16.414571] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.372 [2024-07-14 04:50:16.414583] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.372 [2024-07-14 04:50:16.417455] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.372 [2024-07-14 04:50:16.426678] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.372 [2024-07-14 04:50:16.427162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.372 [2024-07-14 04:50:16.427205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.372 [2024-07-14 04:50:16.427221] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.372 [2024-07-14 04:50:16.427465] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.372 [2024-07-14 04:50:16.427665] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.372 [2024-07-14 04:50:16.427685] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.372 [2024-07-14 04:50:16.427698] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.372 [2024-07-14 04:50:16.430607] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.372 [2024-07-14 04:50:16.439745] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.372 [2024-07-14 04:50:16.440207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.372 [2024-07-14 04:50:16.440249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.372 [2024-07-14 04:50:16.440266] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.372 [2024-07-14 04:50:16.440522] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.372 [2024-07-14 04:50:16.440721] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.372 [2024-07-14 04:50:16.440741] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.372 [2024-07-14 04:50:16.440754] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.372 [2024-07-14 04:50:16.443666] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.372 [2024-07-14 04:50:16.452837] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.372 [2024-07-14 04:50:16.453330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.372 [2024-07-14 04:50:16.453371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.372 [2024-07-14 04:50:16.453388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.372 [2024-07-14 04:50:16.453644] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.372 [2024-07-14 04:50:16.453845] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.372 [2024-07-14 04:50:16.453872] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.372 [2024-07-14 04:50:16.453903] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.372 [2024-07-14 04:50:16.456794] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.372 [2024-07-14 04:50:16.465977] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.372 [2024-07-14 04:50:16.466404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.372 [2024-07-14 04:50:16.466446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.372 [2024-07-14 04:50:16.466463] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.372 [2024-07-14 04:50:16.466714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.372 [2024-07-14 04:50:16.466940] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.372 [2024-07-14 04:50:16.466962] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.372 [2024-07-14 04:50:16.466980] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.372 [2024-07-14 04:50:16.469871] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.372 [2024-07-14 04:50:16.479090] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.372 [2024-07-14 04:50:16.479548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.372 [2024-07-14 04:50:16.479589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.372 [2024-07-14 04:50:16.479607] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.372 [2024-07-14 04:50:16.479859] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.372 [2024-07-14 04:50:16.480088] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.372 [2024-07-14 04:50:16.480108] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.372 [2024-07-14 04:50:16.480122] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.372 [2024-07-14 04:50:16.483010] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.372 [2024-07-14 04:50:16.492336] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.372 [2024-07-14 04:50:16.492808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.372 [2024-07-14 04:50:16.492850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.372 [2024-07-14 04:50:16.492874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.372 [2024-07-14 04:50:16.493119] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.372 [2024-07-14 04:50:16.493336] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.372 [2024-07-14 04:50:16.493356] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.372 [2024-07-14 04:50:16.493369] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.372 [2024-07-14 04:50:16.496236] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.372 [2024-07-14 04:50:16.505536] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.372 [2024-07-14 04:50:16.506012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.372 [2024-07-14 04:50:16.506040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.372 [2024-07-14 04:50:16.506072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.372 [2024-07-14 04:50:16.506327] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.372 [2024-07-14 04:50:16.506527] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.372 [2024-07-14 04:50:16.506546] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.372 [2024-07-14 04:50:16.506559] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.372 [2024-07-14 04:50:16.509471] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.372 [2024-07-14 04:50:16.518648] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.372 [2024-07-14 04:50:16.519101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.372 [2024-07-14 04:50:16.519129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.372 [2024-07-14 04:50:16.519146] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.372 [2024-07-14 04:50:16.519398] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.372 [2024-07-14 04:50:16.519598] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.372 [2024-07-14 04:50:16.519617] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.372 [2024-07-14 04:50:16.519630] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.372 [2024-07-14 04:50:16.522546] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.372 [2024-07-14 04:50:16.531843] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.372 [2024-07-14 04:50:16.532352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.372 [2024-07-14 04:50:16.532380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.372 [2024-07-14 04:50:16.532396] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.372 [2024-07-14 04:50:16.532649] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.372 [2024-07-14 04:50:16.532849] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.372 [2024-07-14 04:50:16.532890] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.372 [2024-07-14 04:50:16.532906] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.372 [2024-07-14 04:50:16.535797] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.372 [2024-07-14 04:50:16.545004] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.372 [2024-07-14 04:50:16.545483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.372 [2024-07-14 04:50:16.545524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.372 [2024-07-14 04:50:16.545541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.372 [2024-07-14 04:50:16.545793] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.372 [2024-07-14 04:50:16.546022] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.372 [2024-07-14 04:50:16.546044] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.372 [2024-07-14 04:50:16.546057] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.372 [2024-07-14 04:50:16.548985] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.372 [2024-07-14 04:50:16.558422] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.373 [2024-07-14 04:50:16.558918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.373 [2024-07-14 04:50:16.558947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.373 [2024-07-14 04:50:16.558964] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.373 [2024-07-14 04:50:16.559218] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.373 [2024-07-14 04:50:16.559423] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.373 [2024-07-14 04:50:16.559443] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.373 [2024-07-14 04:50:16.559456] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.635 [2024-07-14 04:50:16.562499] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.635 [2024-07-14 04:50:16.571554] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.635 [2024-07-14 04:50:16.571998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.635 [2024-07-14 04:50:16.572027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.635 [2024-07-14 04:50:16.572044] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.635 [2024-07-14 04:50:16.572297] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.635 [2024-07-14 04:50:16.572498] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.635 [2024-07-14 04:50:16.572518] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.635 [2024-07-14 04:50:16.572530] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.635 [2024-07-14 04:50:16.575441] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.635 [2024-07-14 04:50:16.584780] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.635 [2024-07-14 04:50:16.585307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.635 [2024-07-14 04:50:16.585350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.635 [2024-07-14 04:50:16.585366] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.635 [2024-07-14 04:50:16.585617] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.635 [2024-07-14 04:50:16.585817] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.635 [2024-07-14 04:50:16.585837] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.635 [2024-07-14 04:50:16.585849] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.635 [2024-07-14 04:50:16.588760] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.635 [2024-07-14 04:50:16.597903] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.635 [2024-07-14 04:50:16.598390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.635 [2024-07-14 04:50:16.598432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.635 [2024-07-14 04:50:16.598449] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.635 [2024-07-14 04:50:16.598701] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.635 [2024-07-14 04:50:16.598927] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.635 [2024-07-14 04:50:16.598948] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.636 [2024-07-14 04:50:16.598961] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.636 [2024-07-14 04:50:16.601851] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.636 [2024-07-14 04:50:16.611363] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.636 [2024-07-14 04:50:16.611786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.636 [2024-07-14 04:50:16.611813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.636 [2024-07-14 04:50:16.611845] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.636 [2024-07-14 04:50:16.612067] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.636 [2024-07-14 04:50:16.612306] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.636 [2024-07-14 04:50:16.612326] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.636 [2024-07-14 04:50:16.612339] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.636 [2024-07-14 04:50:16.615326] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.636 [2024-07-14 04:50:16.624689] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.636 [2024-07-14 04:50:16.625198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.636 [2024-07-14 04:50:16.625226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.636 [2024-07-14 04:50:16.625242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.636 [2024-07-14 04:50:16.625480] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.636 [2024-07-14 04:50:16.625679] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.636 [2024-07-14 04:50:16.625699] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.636 [2024-07-14 04:50:16.625712] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.636 [2024-07-14 04:50:16.628741] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.636 [2024-07-14 04:50:16.637914] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.636 [2024-07-14 04:50:16.638358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.636 [2024-07-14 04:50:16.638386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.636 [2024-07-14 04:50:16.638403] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.636 [2024-07-14 04:50:16.638656] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.636 [2024-07-14 04:50:16.638856] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.636 [2024-07-14 04:50:16.638898] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.636 [2024-07-14 04:50:16.638912] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.636 [2024-07-14 04:50:16.641802] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.636 [2024-07-14 04:50:16.651108] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.636 [2024-07-14 04:50:16.651615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.636 [2024-07-14 04:50:16.651644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.636 [2024-07-14 04:50:16.651665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.636 [2024-07-14 04:50:16.651946] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.636 [2024-07-14 04:50:16.652185] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.636 [2024-07-14 04:50:16.652206] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.636 [2024-07-14 04:50:16.652220] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.636 [2024-07-14 04:50:16.655106] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.636 [2024-07-14 04:50:16.664282] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.636 [2024-07-14 04:50:16.664789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.636 [2024-07-14 04:50:16.664818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.636 [2024-07-14 04:50:16.664834] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.636 [2024-07-14 04:50:16.665084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.636 [2024-07-14 04:50:16.665301] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.636 [2024-07-14 04:50:16.665321] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.636 [2024-07-14 04:50:16.665334] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.636 [2024-07-14 04:50:16.668386] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.636 [2024-07-14 04:50:16.677421] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.636 [2024-07-14 04:50:16.677871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.636 [2024-07-14 04:50:16.677899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.636 [2024-07-14 04:50:16.677916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.636 [2024-07-14 04:50:16.678169] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.636 [2024-07-14 04:50:16.678369] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.636 [2024-07-14 04:50:16.678389] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.636 [2024-07-14 04:50:16.678402] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.636 [2024-07-14 04:50:16.681408] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.636 [2024-07-14 04:50:16.690617] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.636 [2024-07-14 04:50:16.691121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.636 [2024-07-14 04:50:16.691150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.636 [2024-07-14 04:50:16.691166] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.636 [2024-07-14 04:50:16.691421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.636 [2024-07-14 04:50:16.691626] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.636 [2024-07-14 04:50:16.691646] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.636 [2024-07-14 04:50:16.691658] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.636 [2024-07-14 04:50:16.694572] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.636 [2024-07-14 04:50:16.703778] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.636 [2024-07-14 04:50:16.704218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.636 [2024-07-14 04:50:16.704247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.636 [2024-07-14 04:50:16.704263] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.636 [2024-07-14 04:50:16.704504] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.636 [2024-07-14 04:50:16.704721] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.636 [2024-07-14 04:50:16.704741] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.636 [2024-07-14 04:50:16.704754] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.636 [2024-07-14 04:50:16.707629] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.636 [2024-07-14 04:50:16.717036] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.636 [2024-07-14 04:50:16.717492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.636 [2024-07-14 04:50:16.717534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.636 [2024-07-14 04:50:16.717550] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.636 [2024-07-14 04:50:16.717806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.636 [2024-07-14 04:50:16.718035] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.636 [2024-07-14 04:50:16.718056] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.636 [2024-07-14 04:50:16.718070] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.637 [2024-07-14 04:50:16.720998] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.637 [2024-07-14 04:50:16.730261] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.637 [2024-07-14 04:50:16.730746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.637 [2024-07-14 04:50:16.730788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.637 [2024-07-14 04:50:16.730805] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.637 [2024-07-14 04:50:16.731054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.637 [2024-07-14 04:50:16.731273] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.637 [2024-07-14 04:50:16.731293] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.637 [2024-07-14 04:50:16.731306] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.637 [2024-07-14 04:50:16.734263] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.637 [2024-07-14 04:50:16.743458] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.637 [2024-07-14 04:50:16.743910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.637 [2024-07-14 04:50:16.743953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.637 [2024-07-14 04:50:16.743969] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.637 [2024-07-14 04:50:16.744222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.637 [2024-07-14 04:50:16.744423] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.637 [2024-07-14 04:50:16.744442] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.637 [2024-07-14 04:50:16.744455] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.637 [2024-07-14 04:50:16.747369] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.637 [2024-07-14 04:50:16.756716] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.637 [2024-07-14 04:50:16.757175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.637 [2024-07-14 04:50:16.757203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.637 [2024-07-14 04:50:16.757220] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.637 [2024-07-14 04:50:16.757474] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.637 [2024-07-14 04:50:16.757674] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.637 [2024-07-14 04:50:16.757694] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.637 [2024-07-14 04:50:16.757707] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.637 [2024-07-14 04:50:16.760659] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.637 [2024-07-14 04:50:16.769837] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.637 [2024-07-14 04:50:16.770293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.637 [2024-07-14 04:50:16.770335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.637 [2024-07-14 04:50:16.770352] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.637 [2024-07-14 04:50:16.770605] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.637 [2024-07-14 04:50:16.770805] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.637 [2024-07-14 04:50:16.770824] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.637 [2024-07-14 04:50:16.770837] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.637 [2024-07-14 04:50:16.773751] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.637 [2024-07-14 04:50:16.783062] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.637 [2024-07-14 04:50:16.783549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.637 [2024-07-14 04:50:16.783591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.637 [2024-07-14 04:50:16.783613] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.637 [2024-07-14 04:50:16.783874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.637 [2024-07-14 04:50:16.784097] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.637 [2024-07-14 04:50:16.784117] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.637 [2024-07-14 04:50:16.784130] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.637 [2024-07-14 04:50:16.787020] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.637 [2024-07-14 04:50:16.796161] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.637 [2024-07-14 04:50:16.796612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.637 [2024-07-14 04:50:16.796654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.637 [2024-07-14 04:50:16.796671] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.637 [2024-07-14 04:50:16.796932] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.637 [2024-07-14 04:50:16.797154] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.637 [2024-07-14 04:50:16.797174] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.637 [2024-07-14 04:50:16.797187] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.637 [2024-07-14 04:50:16.800073] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.637 [2024-07-14 04:50:16.809376] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.637 [2024-07-14 04:50:16.809818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.637 [2024-07-14 04:50:16.809846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.637 [2024-07-14 04:50:16.809862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.637 [2024-07-14 04:50:16.810114] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.637 [2024-07-14 04:50:16.810332] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.637 [2024-07-14 04:50:16.810352] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.637 [2024-07-14 04:50:16.810365] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.637 [2024-07-14 04:50:16.813237] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.637 [2024-07-14 04:50:16.822619] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.637 [2024-07-14 04:50:16.823072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.637 [2024-07-14 04:50:16.823111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.637 [2024-07-14 04:50:16.823132] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.637 [2024-07-14 04:50:16.823385] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.637 [2024-07-14 04:50:16.823584] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.637 [2024-07-14 04:50:16.823609] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.637 [2024-07-14 04:50:16.823623] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.899 [2024-07-14 04:50:16.826688] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.899 [2024-07-14 04:50:16.835720] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.899 [2024-07-14 04:50:16.836219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.899 [2024-07-14 04:50:16.836261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.899 [2024-07-14 04:50:16.836278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.899 [2024-07-14 04:50:16.836515] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.899 [2024-07-14 04:50:16.836715] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.899 [2024-07-14 04:50:16.836734] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.899 [2024-07-14 04:50:16.836747] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.899 [2024-07-14 04:50:16.839661] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.899 [2024-07-14 04:50:16.848844] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.899 [2024-07-14 04:50:16.849300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.899 [2024-07-14 04:50:16.849341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.899 [2024-07-14 04:50:16.849358] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.899 [2024-07-14 04:50:16.849593] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.899 [2024-07-14 04:50:16.849794] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.899 [2024-07-14 04:50:16.849814] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.899 [2024-07-14 04:50:16.849827] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.899 [2024-07-14 04:50:16.852801] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.899 [2024-07-14 04:50:16.862367] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.899 [2024-07-14 04:50:16.862793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.899 [2024-07-14 04:50:16.862822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.899 [2024-07-14 04:50:16.862838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.899 [2024-07-14 04:50:16.863073] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.900 [2024-07-14 04:50:16.863313] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.900 [2024-07-14 04:50:16.863333] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.900 [2024-07-14 04:50:16.863345] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.900 [2024-07-14 04:50:16.866359] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.900 [2024-07-14 04:50:16.875675] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.900 [2024-07-14 04:50:16.876164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.900 [2024-07-14 04:50:16.876206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.900 [2024-07-14 04:50:16.876223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.900 [2024-07-14 04:50:16.876476] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.900 [2024-07-14 04:50:16.876676] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.900 [2024-07-14 04:50:16.876696] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.900 [2024-07-14 04:50:16.876708] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.900 [2024-07-14 04:50:16.879621] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.900 [2024-07-14 04:50:16.888762] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.900 [2024-07-14 04:50:16.889275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.900 [2024-07-14 04:50:16.889303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.900 [2024-07-14 04:50:16.889318] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.900 [2024-07-14 04:50:16.889572] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.900 [2024-07-14 04:50:16.889771] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.900 [2024-07-14 04:50:16.889791] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.900 [2024-07-14 04:50:16.889804] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.900 [2024-07-14 04:50:16.892743] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.900 [2024-07-14 04:50:16.901890] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.900 [2024-07-14 04:50:16.902396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.900 [2024-07-14 04:50:16.902425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.900 [2024-07-14 04:50:16.902441] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.900 [2024-07-14 04:50:16.902696] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.900 [2024-07-14 04:50:16.902923] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.900 [2024-07-14 04:50:16.902944] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.900 [2024-07-14 04:50:16.902957] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.900 [2024-07-14 04:50:16.905844] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.900 [2024-07-14 04:50:16.914987] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.900 [2024-07-14 04:50:16.915493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.900 [2024-07-14 04:50:16.915521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.900 [2024-07-14 04:50:16.915537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.900 [2024-07-14 04:50:16.915795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.900 [2024-07-14 04:50:16.916024] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.900 [2024-07-14 04:50:16.916045] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.900 [2024-07-14 04:50:16.916059] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.900 [2024-07-14 04:50:16.918948] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.900 [2024-07-14 04:50:16.928131] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.900 [2024-07-14 04:50:16.928584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.900 [2024-07-14 04:50:16.928626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.900 [2024-07-14 04:50:16.928642] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.900 [2024-07-14 04:50:16.928903] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.900 [2024-07-14 04:50:16.929104] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.900 [2024-07-14 04:50:16.929123] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.900 [2024-07-14 04:50:16.929136] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.900 [2024-07-14 04:50:16.932045] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.900 [2024-07-14 04:50:16.941398] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.900 [2024-07-14 04:50:16.941905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.900 [2024-07-14 04:50:16.941933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.900 [2024-07-14 04:50:16.941949] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.900 [2024-07-14 04:50:16.942202] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.900 [2024-07-14 04:50:16.942402] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.900 [2024-07-14 04:50:16.942422] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.900 [2024-07-14 04:50:16.942435] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.900 [2024-07-14 04:50:16.945349] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.900 [2024-07-14 04:50:16.954487] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.900 [2024-07-14 04:50:16.954909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.900 [2024-07-14 04:50:16.954937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.900 [2024-07-14 04:50:16.954969] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.900 [2024-07-14 04:50:16.955223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.900 [2024-07-14 04:50:16.955422] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.900 [2024-07-14 04:50:16.955442] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.900 [2024-07-14 04:50:16.955463] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.900 [2024-07-14 04:50:16.958378] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.900 [2024-07-14 04:50:16.967595] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.900 [2024-07-14 04:50:16.968065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.900 [2024-07-14 04:50:16.968093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.900 [2024-07-14 04:50:16.968124] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.900 [2024-07-14 04:50:16.968380] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.900 [2024-07-14 04:50:16.968579] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.900 [2024-07-14 04:50:16.968599] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.900 [2024-07-14 04:50:16.968612] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.900 [2024-07-14 04:50:16.971521] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.900 [2024-07-14 04:50:16.980826] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.900 [2024-07-14 04:50:16.981282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.900 [2024-07-14 04:50:16.981309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.900 [2024-07-14 04:50:16.981339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.900 [2024-07-14 04:50:16.981592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.900 [2024-07-14 04:50:16.981792] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.900 [2024-07-14 04:50:16.981812] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.900 [2024-07-14 04:50:16.981824] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.900 [2024-07-14 04:50:16.984739] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.900 [2024-07-14 04:50:16.993950] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.900 [2024-07-14 04:50:16.994380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.900 [2024-07-14 04:50:16.994406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.900 [2024-07-14 04:50:16.994437] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.901 [2024-07-14 04:50:16.994692] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.901 [2024-07-14 04:50:16.994917] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.901 [2024-07-14 04:50:16.994938] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.901 [2024-07-14 04:50:16.994952] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.901 [2024-07-14 04:50:16.997838] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.901 [2024-07-14 04:50:17.007147] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.901 [2024-07-14 04:50:17.007604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.901 [2024-07-14 04:50:17.007651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.901 [2024-07-14 04:50:17.007669] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.901 [2024-07-14 04:50:17.007923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.901 [2024-07-14 04:50:17.008129] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.901 [2024-07-14 04:50:17.008149] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.901 [2024-07-14 04:50:17.008163] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.901 [2024-07-14 04:50:17.011049] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.901 [2024-07-14 04:50:17.020387] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.901 [2024-07-14 04:50:17.020830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.901 [2024-07-14 04:50:17.020856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.901 [2024-07-14 04:50:17.020896] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.901 [2024-07-14 04:50:17.021154] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.901 [2024-07-14 04:50:17.021371] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.901 [2024-07-14 04:50:17.021391] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.901 [2024-07-14 04:50:17.021404] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.901 [2024-07-14 04:50:17.024274] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.901 [2024-07-14 04:50:17.033579] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.901 [2024-07-14 04:50:17.034002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.901 [2024-07-14 04:50:17.034030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.901 [2024-07-14 04:50:17.034062] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.901 [2024-07-14 04:50:17.034317] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.901 [2024-07-14 04:50:17.034517] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.901 [2024-07-14 04:50:17.034537] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.901 [2024-07-14 04:50:17.034550] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.901 [2024-07-14 04:50:17.037466] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.901 [2024-07-14 04:50:17.046764] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.901 [2024-07-14 04:50:17.047196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.901 [2024-07-14 04:50:17.047223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.901 [2024-07-14 04:50:17.047255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.901 [2024-07-14 04:50:17.047508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.901 [2024-07-14 04:50:17.047713] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.901 [2024-07-14 04:50:17.047733] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.901 [2024-07-14 04:50:17.047746] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.901 [2024-07-14 04:50:17.050657] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.901 [2024-07-14 04:50:17.060000] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.901 [2024-07-14 04:50:17.060502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.901 [2024-07-14 04:50:17.060530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.901 [2024-07-14 04:50:17.060546] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.901 [2024-07-14 04:50:17.060800] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.901 [2024-07-14 04:50:17.061029] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.901 [2024-07-14 04:50:17.061050] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.901 [2024-07-14 04:50:17.061064] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.901 [2024-07-14 04:50:17.063990] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.901 [2024-07-14 04:50:17.073161] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.901 [2024-07-14 04:50:17.073622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.901 [2024-07-14 04:50:17.073664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.901 [2024-07-14 04:50:17.073681] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.901 [2024-07-14 04:50:17.073927] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.901 [2024-07-14 04:50:17.074133] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.901 [2024-07-14 04:50:17.074153] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.901 [2024-07-14 04:50:17.074166] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.901 [2024-07-14 04:50:17.077051] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:56.901 [2024-07-14 04:50:17.086573] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.901 [2024-07-14 04:50:17.086974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.901 [2024-07-14 04:50:17.087004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:56.901 [2024-07-14 04:50:17.087020] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:56.901 [2024-07-14 04:50:17.087260] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:56.901 [2024-07-14 04:50:17.087461] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.901 [2024-07-14 04:50:17.087480] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.901 [2024-07-14 04:50:17.087493] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.164 [2024-07-14 04:50:17.090571] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.164 [2024-07-14 04:50:17.099768] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.164 [2024-07-14 04:50:17.100224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.164 [2024-07-14 04:50:17.100253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.164 [2024-07-14 04:50:17.100270] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.164 [2024-07-14 04:50:17.100509] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.164 [2024-07-14 04:50:17.100709] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.164 [2024-07-14 04:50:17.100729] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.164 [2024-07-14 04:50:17.100742] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.164 [2024-07-14 04:50:17.103735] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.164 [2024-07-14 04:50:17.113257] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.164 [2024-07-14 04:50:17.113712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.164 [2024-07-14 04:50:17.113755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.164 [2024-07-14 04:50:17.113771] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.164 [2024-07-14 04:50:17.114025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.164 [2024-07-14 04:50:17.114244] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.164 [2024-07-14 04:50:17.114265] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.164 [2024-07-14 04:50:17.114278] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.164 [2024-07-14 04:50:17.117309] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.164 [2024-07-14 04:50:17.126372] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.164 [2024-07-14 04:50:17.126889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.164 [2024-07-14 04:50:17.126918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.164 [2024-07-14 04:50:17.126934] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.164 [2024-07-14 04:50:17.127174] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.164 [2024-07-14 04:50:17.127388] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.164 [2024-07-14 04:50:17.127408] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.164 [2024-07-14 04:50:17.127422] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.164 [2024-07-14 04:50:17.130333] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.164 [2024-07-14 04:50:17.139532] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.164 [2024-07-14 04:50:17.139945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.164 [2024-07-14 04:50:17.139972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.164 [2024-07-14 04:50:17.139994] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.164 [2024-07-14 04:50:17.140230] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.164 [2024-07-14 04:50:17.140430] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.164 [2024-07-14 04:50:17.140450] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.164 [2024-07-14 04:50:17.140463] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.164 [2024-07-14 04:50:17.143378] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.164 [2024-07-14 04:50:17.152767] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.164 [2024-07-14 04:50:17.153252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.164 [2024-07-14 04:50:17.153280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.164 [2024-07-14 04:50:17.153312] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.164 [2024-07-14 04:50:17.153565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.164 [2024-07-14 04:50:17.153771] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.164 [2024-07-14 04:50:17.153790] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.164 [2024-07-14 04:50:17.153803] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.164 [2024-07-14 04:50:17.156677] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.164 [2024-07-14 04:50:17.166053] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.164 [2024-07-14 04:50:17.166504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.164 [2024-07-14 04:50:17.166533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.164 [2024-07-14 04:50:17.166549] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.164 [2024-07-14 04:50:17.166804] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.164 [2024-07-14 04:50:17.167034] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.164 [2024-07-14 04:50:17.167055] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.164 [2024-07-14 04:50:17.167069] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.164 [2024-07-14 04:50:17.170009] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.164 [2024-07-14 04:50:17.179814] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.164 [2024-07-14 04:50:17.180265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.164 [2024-07-14 04:50:17.180297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.164 [2024-07-14 04:50:17.180315] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.164 [2024-07-14 04:50:17.180553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.164 [2024-07-14 04:50:17.180797] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.164 [2024-07-14 04:50:17.180826] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.164 [2024-07-14 04:50:17.180843] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.165 [2024-07-14 04:50:17.184405] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.165 [2024-07-14 04:50:17.193603] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.165 [2024-07-14 04:50:17.194129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.165 [2024-07-14 04:50:17.194157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.165 [2024-07-14 04:50:17.194174] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.165 [2024-07-14 04:50:17.194429] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.165 [2024-07-14 04:50:17.194677] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.165 [2024-07-14 04:50:17.194701] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.165 [2024-07-14 04:50:17.194717] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.165 [2024-07-14 04:50:17.198287] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.165 [2024-07-14 04:50:17.207535] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.165 [2024-07-14 04:50:17.207985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.165 [2024-07-14 04:50:17.208017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.165 [2024-07-14 04:50:17.208036] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.165 [2024-07-14 04:50:17.208275] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.165 [2024-07-14 04:50:17.208519] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.165 [2024-07-14 04:50:17.208553] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.165 [2024-07-14 04:50:17.208569] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.165 [2024-07-14 04:50:17.212168] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.165 [2024-07-14 04:50:17.221496] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.165 [2024-07-14 04:50:17.221946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.165 [2024-07-14 04:50:17.221978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.165 [2024-07-14 04:50:17.221996] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.165 [2024-07-14 04:50:17.222235] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.165 [2024-07-14 04:50:17.222479] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.165 [2024-07-14 04:50:17.222504] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.165 [2024-07-14 04:50:17.222519] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.165 [2024-07-14 04:50:17.226117] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.165 [2024-07-14 04:50:17.235463] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.165 [2024-07-14 04:50:17.235938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.165 [2024-07-14 04:50:17.235971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.165 [2024-07-14 04:50:17.235988] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.165 [2024-07-14 04:50:17.236227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.165 [2024-07-14 04:50:17.236471] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.165 [2024-07-14 04:50:17.236495] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.165 [2024-07-14 04:50:17.236511] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.165 [2024-07-14 04:50:17.240108] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.165 [2024-07-14 04:50:17.249447] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.165 [2024-07-14 04:50:17.249909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.165 [2024-07-14 04:50:17.249940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.165 [2024-07-14 04:50:17.249957] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.165 [2024-07-14 04:50:17.250196] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.165 [2024-07-14 04:50:17.250439] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.165 [2024-07-14 04:50:17.250463] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.165 [2024-07-14 04:50:17.250479] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.165 [2024-07-14 04:50:17.254076] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.165 [2024-07-14 04:50:17.263399] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.165 [2024-07-14 04:50:17.263871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.165 [2024-07-14 04:50:17.263903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.165 [2024-07-14 04:50:17.263921] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.165 [2024-07-14 04:50:17.264160] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.165 [2024-07-14 04:50:17.264403] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.165 [2024-07-14 04:50:17.264427] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.165 [2024-07-14 04:50:17.264443] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.165 [2024-07-14 04:50:17.268037] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.165 [2024-07-14 04:50:17.277359] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.165 [2024-07-14 04:50:17.277897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.165 [2024-07-14 04:50:17.277935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.165 [2024-07-14 04:50:17.277958] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.165 [2024-07-14 04:50:17.278199] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.165 [2024-07-14 04:50:17.278443] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.165 [2024-07-14 04:50:17.278467] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.165 [2024-07-14 04:50:17.278483] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.165 [2024-07-14 04:50:17.282081] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.165 [2024-07-14 04:50:17.291403] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.165 [2024-07-14 04:50:17.291875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.165 [2024-07-14 04:50:17.291907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.165 [2024-07-14 04:50:17.291925] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.165 [2024-07-14 04:50:17.292164] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.165 [2024-07-14 04:50:17.292408] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.165 [2024-07-14 04:50:17.292432] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.165 [2024-07-14 04:50:17.292448] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.165 [2024-07-14 04:50:17.296039] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.165 [2024-07-14 04:50:17.305350] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.165 [2024-07-14 04:50:17.305942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.165 [2024-07-14 04:50:17.305974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.165 [2024-07-14 04:50:17.305992] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.165 [2024-07-14 04:50:17.306231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.165 [2024-07-14 04:50:17.306475] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.165 [2024-07-14 04:50:17.306499] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.165 [2024-07-14 04:50:17.306515] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.166 [2024-07-14 04:50:17.310103] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.166 [2024-07-14 04:50:17.319213] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.166 [2024-07-14 04:50:17.319849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.166 [2024-07-14 04:50:17.319921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.166 [2024-07-14 04:50:17.319940] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.166 [2024-07-14 04:50:17.320179] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.166 [2024-07-14 04:50:17.320422] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.166 [2024-07-14 04:50:17.320451] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.166 [2024-07-14 04:50:17.320468] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.166 [2024-07-14 04:50:17.324063] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.166 [2024-07-14 04:50:17.333164] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.166 [2024-07-14 04:50:17.333636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.166 [2024-07-14 04:50:17.333667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.166 [2024-07-14 04:50:17.333685] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.166 [2024-07-14 04:50:17.333937] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.166 [2024-07-14 04:50:17.334181] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.166 [2024-07-14 04:50:17.334205] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.166 [2024-07-14 04:50:17.334220] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.166 [2024-07-14 04:50:17.337809] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.166 [2024-07-14 04:50:17.347124] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.166 [2024-07-14 04:50:17.347601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.166 [2024-07-14 04:50:17.347633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.166 [2024-07-14 04:50:17.347652] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.166 [2024-07-14 04:50:17.347940] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.166 [2024-07-14 04:50:17.348186] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.166 [2024-07-14 04:50:17.348210] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.166 [2024-07-14 04:50:17.348226] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.166 [2024-07-14 04:50:17.351916] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.428 [2024-07-14 04:50:17.361224] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.428 [2024-07-14 04:50:17.361723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.428 [2024-07-14 04:50:17.361755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.428 [2024-07-14 04:50:17.361773] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.428 [2024-07-14 04:50:17.362029] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.428 [2024-07-14 04:50:17.362250] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.428 [2024-07-14 04:50:17.362287] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.428 [2024-07-14 04:50:17.362301] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.428 [2024-07-14 04:50:17.365503] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.428 [2024-07-14 04:50:17.375282] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.428 [2024-07-14 04:50:17.375812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.428 [2024-07-14 04:50:17.375854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.428 [2024-07-14 04:50:17.375880] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.428 [2024-07-14 04:50:17.376134] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.428 [2024-07-14 04:50:17.376378] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.428 [2024-07-14 04:50:17.376402] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.428 [2024-07-14 04:50:17.376417] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.428 [2024-07-14 04:50:17.380008] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.428 [2024-07-14 04:50:17.389336] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.428 [2024-07-14 04:50:17.389937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.428 [2024-07-14 04:50:17.389969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.428 [2024-07-14 04:50:17.389987] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.428 [2024-07-14 04:50:17.390226] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.428 [2024-07-14 04:50:17.390470] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.428 [2024-07-14 04:50:17.390494] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.428 [2024-07-14 04:50:17.390510] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.428 [2024-07-14 04:50:17.394104] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.428 [2024-07-14 04:50:17.403217] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.428 [2024-07-14 04:50:17.403681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.428 [2024-07-14 04:50:17.403713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.428 [2024-07-14 04:50:17.403731] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.428 [2024-07-14 04:50:17.403980] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.428 [2024-07-14 04:50:17.404224] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.428 [2024-07-14 04:50:17.404248] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.428 [2024-07-14 04:50:17.404264] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.428 [2024-07-14 04:50:17.407851] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.428 [2024-07-14 04:50:17.417177] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.428 [2024-07-14 04:50:17.417626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.428 [2024-07-14 04:50:17.417658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.428 [2024-07-14 04:50:17.417676] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.428 [2024-07-14 04:50:17.417932] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.428 [2024-07-14 04:50:17.418177] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.429 [2024-07-14 04:50:17.418202] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.429 [2024-07-14 04:50:17.418218] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.429 [2024-07-14 04:50:17.421797] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.429 [2024-07-14 04:50:17.431107] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.429 [2024-07-14 04:50:17.431552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.429 [2024-07-14 04:50:17.431583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.429 [2024-07-14 04:50:17.431602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.429 [2024-07-14 04:50:17.431841] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.429 [2024-07-14 04:50:17.432093] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.429 [2024-07-14 04:50:17.432117] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.429 [2024-07-14 04:50:17.432134] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.429 [2024-07-14 04:50:17.435720] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.429 [2024-07-14 04:50:17.445039] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.429 [2024-07-14 04:50:17.445501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.429 [2024-07-14 04:50:17.445532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.429 [2024-07-14 04:50:17.445550] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.429 [2024-07-14 04:50:17.445788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.429 [2024-07-14 04:50:17.446046] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.429 [2024-07-14 04:50:17.446071] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.429 [2024-07-14 04:50:17.446087] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.429 [2024-07-14 04:50:17.449694] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.429 [2024-07-14 04:50:17.459027] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.429 [2024-07-14 04:50:17.459491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.429 [2024-07-14 04:50:17.459523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.429 [2024-07-14 04:50:17.459541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.429 [2024-07-14 04:50:17.459779] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.429 [2024-07-14 04:50:17.460036] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.429 [2024-07-14 04:50:17.460061] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.429 [2024-07-14 04:50:17.460083] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.429 [2024-07-14 04:50:17.463687] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.429 [2024-07-14 04:50:17.473013] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.429 [2024-07-14 04:50:17.473479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.429 [2024-07-14 04:50:17.473517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.429 [2024-07-14 04:50:17.473535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.429 [2024-07-14 04:50:17.473774] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.429 [2024-07-14 04:50:17.474030] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.429 [2024-07-14 04:50:17.474056] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.429 [2024-07-14 04:50:17.474072] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.429 [2024-07-14 04:50:17.477657] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.429 [2024-07-14 04:50:17.486979] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.429 [2024-07-14 04:50:17.487418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.429 [2024-07-14 04:50:17.487449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.429 [2024-07-14 04:50:17.487467] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.429 [2024-07-14 04:50:17.487705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.429 [2024-07-14 04:50:17.487960] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.429 [2024-07-14 04:50:17.487986] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.429 [2024-07-14 04:50:17.488002] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.429 [2024-07-14 04:50:17.491584] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.429 [2024-07-14 04:50:17.500939] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.429 [2024-07-14 04:50:17.501398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.429 [2024-07-14 04:50:17.501429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.429 [2024-07-14 04:50:17.501447] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.429 [2024-07-14 04:50:17.501686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.429 [2024-07-14 04:50:17.501940] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.429 [2024-07-14 04:50:17.501965] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.429 [2024-07-14 04:50:17.501982] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.429 [2024-07-14 04:50:17.505568] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.429 [2024-07-14 04:50:17.514892] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.429 [2024-07-14 04:50:17.515353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.429 [2024-07-14 04:50:17.515389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.429 [2024-07-14 04:50:17.515408] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.429 [2024-07-14 04:50:17.515646] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.429 [2024-07-14 04:50:17.515900] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.429 [2024-07-14 04:50:17.515925] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.429 [2024-07-14 04:50:17.515941] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.429 [2024-07-14 04:50:17.519526] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.429 [2024-07-14 04:50:17.528855] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.429 [2024-07-14 04:50:17.529326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.429 [2024-07-14 04:50:17.529358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.429 [2024-07-14 04:50:17.529376] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.429 [2024-07-14 04:50:17.529614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.429 [2024-07-14 04:50:17.529858] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.429 [2024-07-14 04:50:17.529893] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.429 [2024-07-14 04:50:17.529910] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.429 [2024-07-14 04:50:17.533495] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.429 [2024-07-14 04:50:17.542814] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.429 [2024-07-14 04:50:17.543258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.429 [2024-07-14 04:50:17.543290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.429 [2024-07-14 04:50:17.543308] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.429 [2024-07-14 04:50:17.543546] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.429 [2024-07-14 04:50:17.543789] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.429 [2024-07-14 04:50:17.543814] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.429 [2024-07-14 04:50:17.543829] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.429 [2024-07-14 04:50:17.547428] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.429 [2024-07-14 04:50:17.556781] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.429 [2024-07-14 04:50:17.557276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.429 [2024-07-14 04:50:17.557319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.429 [2024-07-14 04:50:17.557335] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.429 [2024-07-14 04:50:17.557594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.429 [2024-07-14 04:50:17.557844] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.429 [2024-07-14 04:50:17.557879] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.429 [2024-07-14 04:50:17.557898] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.429 [2024-07-14 04:50:17.561489] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.429 [2024-07-14 04:50:17.570811] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.429 [2024-07-14 04:50:17.571280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.429 [2024-07-14 04:50:17.571311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.429 [2024-07-14 04:50:17.571329] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.430 [2024-07-14 04:50:17.571568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.430 [2024-07-14 04:50:17.571812] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.430 [2024-07-14 04:50:17.571836] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.430 [2024-07-14 04:50:17.571852] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.430 [2024-07-14 04:50:17.575444] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.430 [2024-07-14 04:50:17.584760] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.430 [2024-07-14 04:50:17.585228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.430 [2024-07-14 04:50:17.585259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.430 [2024-07-14 04:50:17.585277] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.430 [2024-07-14 04:50:17.585515] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.430 [2024-07-14 04:50:17.585759] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.430 [2024-07-14 04:50:17.585783] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.430 [2024-07-14 04:50:17.585799] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.430 [2024-07-14 04:50:17.589397] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.430 [2024-07-14 04:50:17.598719] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.430 [2024-07-14 04:50:17.599180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.430 [2024-07-14 04:50:17.599213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.430 [2024-07-14 04:50:17.599231] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.430 [2024-07-14 04:50:17.599471] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.430 [2024-07-14 04:50:17.599714] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.430 [2024-07-14 04:50:17.599739] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.430 [2024-07-14 04:50:17.599754] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.430 [2024-07-14 04:50:17.603356] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.430 [2024-07-14 04:50:17.612704] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.430 [2024-07-14 04:50:17.613181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.430 [2024-07-14 04:50:17.613213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.430 [2024-07-14 04:50:17.613231] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.430 [2024-07-14 04:50:17.613470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.430 [2024-07-14 04:50:17.613714] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.430 [2024-07-14 04:50:17.613739] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.430 [2024-07-14 04:50:17.613754] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.430 [2024-07-14 04:50:17.617449] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.692 [2024-07-14 04:50:17.626750] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.692 [2024-07-14 04:50:17.627208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.692 [2024-07-14 04:50:17.627240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.692 [2024-07-14 04:50:17.627259] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.692 [2024-07-14 04:50:17.627498] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.692 [2024-07-14 04:50:17.627742] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.692 [2024-07-14 04:50:17.627766] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.692 [2024-07-14 04:50:17.627782] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.692 [2024-07-14 04:50:17.631374] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.692 [2024-07-14 04:50:17.640693] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.692 [2024-07-14 04:50:17.641145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.692 [2024-07-14 04:50:17.641177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.692 [2024-07-14 04:50:17.641195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.692 [2024-07-14 04:50:17.641434] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.692 [2024-07-14 04:50:17.641678] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.692 [2024-07-14 04:50:17.641702] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.692 [2024-07-14 04:50:17.641717] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.692 [2024-07-14 04:50:17.645316] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.692 [2024-07-14 04:50:17.654631] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.692 [2024-07-14 04:50:17.655096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.692 [2024-07-14 04:50:17.655128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.692 [2024-07-14 04:50:17.655156] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.692 [2024-07-14 04:50:17.655397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.692 [2024-07-14 04:50:17.655641] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.692 [2024-07-14 04:50:17.655665] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.692 [2024-07-14 04:50:17.655681] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.692 [2024-07-14 04:50:17.659272] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.692 [2024-07-14 04:50:17.668582] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.692 [2024-07-14 04:50:17.669033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.692 [2024-07-14 04:50:17.669061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.692 [2024-07-14 04:50:17.669076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.692 [2024-07-14 04:50:17.669325] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.693 [2024-07-14 04:50:17.669569] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.693 [2024-07-14 04:50:17.669593] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.693 [2024-07-14 04:50:17.669610] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.693 [2024-07-14 04:50:17.673203] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.693 [2024-07-14 04:50:17.682516] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.693 [2024-07-14 04:50:17.682979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.693 [2024-07-14 04:50:17.683010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.693 [2024-07-14 04:50:17.683028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.693 [2024-07-14 04:50:17.683266] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.693 [2024-07-14 04:50:17.683510] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.693 [2024-07-14 04:50:17.683534] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.693 [2024-07-14 04:50:17.683549] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.693 [2024-07-14 04:50:17.687144] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.693 [2024-07-14 04:50:17.696575] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.693 [2024-07-14 04:50:17.697050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.693 [2024-07-14 04:50:17.697082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.693 [2024-07-14 04:50:17.697100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.693 [2024-07-14 04:50:17.697340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.693 [2024-07-14 04:50:17.697583] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.693 [2024-07-14 04:50:17.697613] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.693 [2024-07-14 04:50:17.697629] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.693 [2024-07-14 04:50:17.701224] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.693 [2024-07-14 04:50:17.710537] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.693 [2024-07-14 04:50:17.711025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.693 [2024-07-14 04:50:17.711053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.693 [2024-07-14 04:50:17.711085] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.693 [2024-07-14 04:50:17.711344] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.693 [2024-07-14 04:50:17.711589] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.693 [2024-07-14 04:50:17.711613] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.693 [2024-07-14 04:50:17.711630] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.693 [2024-07-14 04:50:17.715228] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.693 [2024-07-14 04:50:17.724548] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.693 [2024-07-14 04:50:17.725037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.693 [2024-07-14 04:50:17.725068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.693 [2024-07-14 04:50:17.725086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.693 [2024-07-14 04:50:17.725325] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.693 [2024-07-14 04:50:17.725569] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.693 [2024-07-14 04:50:17.725593] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.693 [2024-07-14 04:50:17.725609] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.693 [2024-07-14 04:50:17.729200] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.693 [2024-07-14 04:50:17.738517] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.693 [2024-07-14 04:50:17.739013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.693 [2024-07-14 04:50:17.739054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.693 [2024-07-14 04:50:17.739070] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.693 [2024-07-14 04:50:17.739325] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.693 [2024-07-14 04:50:17.739569] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.693 [2024-07-14 04:50:17.739593] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.693 [2024-07-14 04:50:17.739609] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.693 [2024-07-14 04:50:17.743203] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.693 [2024-07-14 04:50:17.752515] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.693 [2024-07-14 04:50:17.752947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.693 [2024-07-14 04:50:17.752979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.693 [2024-07-14 04:50:17.752997] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.693 [2024-07-14 04:50:17.753236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.693 [2024-07-14 04:50:17.753479] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.693 [2024-07-14 04:50:17.753503] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.693 [2024-07-14 04:50:17.753519] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.693 [2024-07-14 04:50:17.757107] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.693 [2024-07-14 04:50:17.766427] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.693 [2024-07-14 04:50:17.766889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.693 [2024-07-14 04:50:17.766924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.693 [2024-07-14 04:50:17.766942] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.693 [2024-07-14 04:50:17.767180] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.693 [2024-07-14 04:50:17.767423] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.693 [2024-07-14 04:50:17.767447] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.693 [2024-07-14 04:50:17.767463] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.693 [2024-07-14 04:50:17.771056] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.693 [2024-07-14 04:50:17.780371] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.693 [2024-07-14 04:50:17.780812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.693 [2024-07-14 04:50:17.780840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.693 [2024-07-14 04:50:17.780855] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.693 [2024-07-14 04:50:17.781125] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.693 [2024-07-14 04:50:17.781370] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.693 [2024-07-14 04:50:17.781395] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.693 [2024-07-14 04:50:17.781411] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.693 [2024-07-14 04:50:17.785003] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.693 [2024-07-14 04:50:17.794315] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.693 [2024-07-14 04:50:17.794936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.693 [2024-07-14 04:50:17.794968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.693 [2024-07-14 04:50:17.794986] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.693 [2024-07-14 04:50:17.795230] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.693 [2024-07-14 04:50:17.795474] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.693 [2024-07-14 04:50:17.795498] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.693 [2024-07-14 04:50:17.795515] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.693 [2024-07-14 04:50:17.799108] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.693 [2024-07-14 04:50:17.808226] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.693 [2024-07-14 04:50:17.808666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.693 [2024-07-14 04:50:17.808697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.693 [2024-07-14 04:50:17.808716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.693 [2024-07-14 04:50:17.808966] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.693 [2024-07-14 04:50:17.809211] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.693 [2024-07-14 04:50:17.809234] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.693 [2024-07-14 04:50:17.809250] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.693 [2024-07-14 04:50:17.812838] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.693 [2024-07-14 04:50:17.822161] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.693 [2024-07-14 04:50:17.822605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.693 [2024-07-14 04:50:17.822636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.693 [2024-07-14 04:50:17.822653] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.694 [2024-07-14 04:50:17.822903] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.694 [2024-07-14 04:50:17.823147] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.694 [2024-07-14 04:50:17.823171] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.694 [2024-07-14 04:50:17.823186] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.694 [2024-07-14 04:50:17.826768] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.694 [2024-07-14 04:50:17.836094] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.694 [2024-07-14 04:50:17.836564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.694 [2024-07-14 04:50:17.836595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.694 [2024-07-14 04:50:17.836613] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.694 [2024-07-14 04:50:17.836852] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.694 [2024-07-14 04:50:17.837106] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.694 [2024-07-14 04:50:17.837131] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.694 [2024-07-14 04:50:17.837152] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.694 [2024-07-14 04:50:17.840736] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.694 [2024-07-14 04:50:17.850055] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.694 [2024-07-14 04:50:17.850490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.694 [2024-07-14 04:50:17.850521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.694 [2024-07-14 04:50:17.850539] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.694 [2024-07-14 04:50:17.850777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.694 [2024-07-14 04:50:17.851032] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.694 [2024-07-14 04:50:17.851057] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.694 [2024-07-14 04:50:17.851073] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.694 [2024-07-14 04:50:17.854655] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.694 [2024-07-14 04:50:17.863973] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.694 [2024-07-14 04:50:17.864451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.694 [2024-07-14 04:50:17.864478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.694 [2024-07-14 04:50:17.864494] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.694 [2024-07-14 04:50:17.864736] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.694 [2024-07-14 04:50:17.864999] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.694 [2024-07-14 04:50:17.865025] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.694 [2024-07-14 04:50:17.865041] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.694 [2024-07-14 04:50:17.868624] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.694 [2024-07-14 04:50:17.878016] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.694 [2024-07-14 04:50:17.878462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.694 [2024-07-14 04:50:17.878494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.694 [2024-07-14 04:50:17.878512] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.694 [2024-07-14 04:50:17.878765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.694 [2024-07-14 04:50:17.879040] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.694 [2024-07-14 04:50:17.879070] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.694 [2024-07-14 04:50:17.879101] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.955 [2024-07-14 04:50:17.882775] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.955 [2024-07-14 04:50:17.892048] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.955 [2024-07-14 04:50:17.892528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.955 [2024-07-14 04:50:17.892560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.955 [2024-07-14 04:50:17.892578] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.955 [2024-07-14 04:50:17.892817] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.955 [2024-07-14 04:50:17.893073] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.955 [2024-07-14 04:50:17.893097] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.955 [2024-07-14 04:50:17.893113] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.955 [2024-07-14 04:50:17.896698] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.955 [2024-07-14 04:50:17.906017] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.955 [2024-07-14 04:50:17.906473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.955 [2024-07-14 04:50:17.906505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.955 [2024-07-14 04:50:17.906523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.955 [2024-07-14 04:50:17.906762] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.955 [2024-07-14 04:50:17.907019] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.955 [2024-07-14 04:50:17.907044] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.955 [2024-07-14 04:50:17.907060] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.955 [2024-07-14 04:50:17.910641] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.955 [2024-07-14 04:50:17.919959] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.955 [2024-07-14 04:50:17.920429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.955 [2024-07-14 04:50:17.920460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.955 [2024-07-14 04:50:17.920478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.955 [2024-07-14 04:50:17.920716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.955 [2024-07-14 04:50:17.920971] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.955 [2024-07-14 04:50:17.920996] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.955 [2024-07-14 04:50:17.921012] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.955 [2024-07-14 04:50:17.924594] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.955 [2024-07-14 04:50:17.933904] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.955 [2024-07-14 04:50:17.934341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.955 [2024-07-14 04:50:17.934373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.955 [2024-07-14 04:50:17.934391] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.955 [2024-07-14 04:50:17.934634] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.955 [2024-07-14 04:50:17.934894] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.955 [2024-07-14 04:50:17.934918] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.955 [2024-07-14 04:50:17.934934] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.955 [2024-07-14 04:50:17.938515] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.955 [2024-07-14 04:50:17.947817] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.955 [2024-07-14 04:50:17.948259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.955 [2024-07-14 04:50:17.948291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.955 [2024-07-14 04:50:17.948309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.955 [2024-07-14 04:50:17.948547] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.955 [2024-07-14 04:50:17.948791] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.955 [2024-07-14 04:50:17.948815] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.955 [2024-07-14 04:50:17.948831] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.955 [2024-07-14 04:50:17.952423] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.955 [2024-07-14 04:50:17.961727] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.955 [2024-07-14 04:50:17.962173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.955 [2024-07-14 04:50:17.962205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.955 [2024-07-14 04:50:17.962223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.955 [2024-07-14 04:50:17.962461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.955 [2024-07-14 04:50:17.962705] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.955 [2024-07-14 04:50:17.962729] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.955 [2024-07-14 04:50:17.962744] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.955 [2024-07-14 04:50:17.966335] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.955 [2024-07-14 04:50:17.975639] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.955 [2024-07-14 04:50:17.976080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.955 [2024-07-14 04:50:17.976111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.955 [2024-07-14 04:50:17.976129] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.955 [2024-07-14 04:50:17.976368] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.955 [2024-07-14 04:50:17.976611] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.955 [2024-07-14 04:50:17.976635] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.955 [2024-07-14 04:50:17.976656] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.955 [2024-07-14 04:50:17.980249] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.955 [2024-07-14 04:50:17.989552] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.955 [2024-07-14 04:50:17.989997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.955 [2024-07-14 04:50:17.990028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.955 [2024-07-14 04:50:17.990046] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.955 [2024-07-14 04:50:17.990284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.955 [2024-07-14 04:50:17.990527] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.955 [2024-07-14 04:50:17.990552] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.955 [2024-07-14 04:50:17.990567] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.955 [2024-07-14 04:50:17.994159] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.955 [2024-07-14 04:50:18.003470] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.955 [2024-07-14 04:50:18.003932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.955 [2024-07-14 04:50:18.003973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.955 [2024-07-14 04:50:18.003991] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.955 [2024-07-14 04:50:18.004231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.955 [2024-07-14 04:50:18.004475] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.955 [2024-07-14 04:50:18.004499] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.955 [2024-07-14 04:50:18.004515] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.955 [2024-07-14 04:50:18.008109] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.955 [2024-07-14 04:50:18.017429] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.956 [2024-07-14 04:50:18.017882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.956 [2024-07-14 04:50:18.017923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.956 [2024-07-14 04:50:18.017939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.956 [2024-07-14 04:50:18.018213] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.956 [2024-07-14 04:50:18.018457] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.956 [2024-07-14 04:50:18.018481] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.956 [2024-07-14 04:50:18.018497] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.956 [2024-07-14 04:50:18.022090] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.956 [2024-07-14 04:50:18.031401] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.956 [2024-07-14 04:50:18.031875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.956 [2024-07-14 04:50:18.031911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.956 [2024-07-14 04:50:18.031930] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.956 [2024-07-14 04:50:18.032169] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.956 [2024-07-14 04:50:18.032413] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.956 [2024-07-14 04:50:18.032437] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.956 [2024-07-14 04:50:18.032453] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.956 [2024-07-14 04:50:18.036050] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.956 [2024-07-14 04:50:18.045356] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.956 [2024-07-14 04:50:18.045817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.956 [2024-07-14 04:50:18.045848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.956 [2024-07-14 04:50:18.045874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.956 [2024-07-14 04:50:18.046116] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.956 [2024-07-14 04:50:18.046359] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.956 [2024-07-14 04:50:18.046383] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.956 [2024-07-14 04:50:18.046399] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.956 [2024-07-14 04:50:18.049988] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.956 [2024-07-14 04:50:18.059293] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.956 [2024-07-14 04:50:18.059749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.956 [2024-07-14 04:50:18.059780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.956 [2024-07-14 04:50:18.059798] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.956 [2024-07-14 04:50:18.060048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.956 [2024-07-14 04:50:18.060292] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.956 [2024-07-14 04:50:18.060316] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.956 [2024-07-14 04:50:18.060332] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.956 [2024-07-14 04:50:18.063925] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
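The cycle above keeps repeating because nothing is listening on 10.0.0.2:4420 while the target application is down: every reconnect attempt gets connect() failed, errno = 111 (ECONNREFUSED), the qpair never becomes usable, and each reset attempt ends with "Resetting controller failed." The short sketch below only illustrates that errno-111 retry condition; it is not SPDK code, and the address, port, and timing values are assumptions copied from the records above.

import errno
import socket
import time

def try_connect(addr="10.0.0.2", port=4420, retries=5, delay=0.5):
    """Dial a TCP listener the way the failing qpair does; back off on ECONNREFUSED."""
    for attempt in range(1, retries + 1):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.connect((addr, port))
            return sock                              # listener is back; connection established
        except OSError as exc:
            sock.close()
            if exc.errno != errno.ECONNREFUSED:      # errno 111 in the records above
                raise                                # a different failure; do not mask it
            print(f"attempt {attempt}: connect() failed, errno = {exc.errno}")
            time.sleep(delay)                        # no listener yet; wait and try again
    return None                                      # every attempt refused, as in the log

Run against an address with no listener, it reports the same errno = 111 outcome on every attempt and gives up, which is the state the bdevperf host stays in until the target is restarted a few lines below.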
00:33:57.956 [2024-07-14 04:50:18.073224] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.956 [2024-07-14 04:50:18.073673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.956 [2024-07-14 04:50:18.073714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.956 [2024-07-14 04:50:18.073731] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.956 [2024-07-14 04:50:18.074000] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.956 [2024-07-14 04:50:18.074251] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.956 [2024-07-14 04:50:18.074275] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.956 [2024-07-14 04:50:18.074291] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.956 [2024-07-14 04:50:18.077878] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2944858 Killed "${NVMF_APP[@]}" "$@" 00:33:57.956 [2024-07-14 04:50:18.087180] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.956 04:50:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:33:57.956 04:50:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:57.956 04:50:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:57.956 [2024-07-14 04:50:18.087647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.956 [2024-07-14 04:50:18.087678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.956 [2024-07-14 04:50:18.087696] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.956 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:57.956 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:57.956 [2024-07-14 04:50:18.087945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.956 [2024-07-14 04:50:18.088189] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.956 [2024-07-14 04:50:18.088213] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.956 [2024-07-14 04:50:18.088229] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.956 [2024-07-14 04:50:18.091810] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.956 04:50:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2945805 00:33:57.956 04:50:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:57.956 04:50:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2945805 00:33:57.956 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 2945805 ']' 00:33:57.956 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:57.956 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:57.956 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:57.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:57.956 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:57.956 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:57.956 [2024-07-14 04:50:18.101135] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.956 [2024-07-14 04:50:18.101605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.956 [2024-07-14 04:50:18.101636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.956 [2024-07-14 04:50:18.101654] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.956 [2024-07-14 04:50:18.101903] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.956 [2024-07-14 04:50:18.102148] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.956 [2024-07-14 04:50:18.102178] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.956 [2024-07-14 04:50:18.102195] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.956 [2024-07-14 04:50:18.105774] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.956 [2024-07-14 04:50:18.115092] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.956 [2024-07-14 04:50:18.115618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.956 [2024-07-14 04:50:18.115646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.956 [2024-07-14 04:50:18.115663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.956 [2024-07-14 04:50:18.115914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.956 [2024-07-14 04:50:18.116160] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.956 [2024-07-14 04:50:18.116184] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.956 [2024-07-14 04:50:18.116200] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.956 [2024-07-14 04:50:18.119784] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.956 [2024-07-14 04:50:18.129093] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.956 [2024-07-14 04:50:18.129551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.956 [2024-07-14 04:50:18.129583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.956 [2024-07-14 04:50:18.129600] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.956 [2024-07-14 04:50:18.129839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.956 [2024-07-14 04:50:18.130092] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.956 [2024-07-14 04:50:18.130117] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.956 [2024-07-14 04:50:18.130133] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.956 [2024-07-14 04:50:18.133714] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:57.956 [2024-07-14 04:50:18.139709] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:33:57.956 [2024-07-14 04:50:18.139787] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:57.956 [2024-07-14 04:50:18.143200] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.956 [2024-07-14 04:50:18.143688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.956 [2024-07-14 04:50:18.143721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:57.957 [2024-07-14 04:50:18.143739] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:57.957 [2024-07-14 04:50:18.143990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:57.957 [2024-07-14 04:50:18.144236] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.957 [2024-07-14 04:50:18.144266] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.957 [2024-07-14 04:50:18.144283] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.217 [2024-07-14 04:50:18.147980] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.217 [2024-07-14 04:50:18.157211] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.217 [2024-07-14 04:50:18.157660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.217 [2024-07-14 04:50:18.157692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.217 [2024-07-14 04:50:18.157710] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.217 [2024-07-14 04:50:18.157961] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.217 [2024-07-14 04:50:18.158205] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.217 [2024-07-14 04:50:18.158230] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.217 [2024-07-14 04:50:18.158246] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.217 [2024-07-14 04:50:18.161828] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.217 [2024-07-14 04:50:18.171268] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.217 [2024-07-14 04:50:18.171720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.217 [2024-07-14 04:50:18.171751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.217 [2024-07-14 04:50:18.171769] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.217 [2024-07-14 04:50:18.172017] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.217 [2024-07-14 04:50:18.172261] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.217 [2024-07-14 04:50:18.172285] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.217 [2024-07-14 04:50:18.172302] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.217 EAL: No free 2048 kB hugepages reported on node 1 00:33:58.217 [2024-07-14 04:50:18.175889] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.217 [2024-07-14 04:50:18.185253] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.217 [2024-07-14 04:50:18.185711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.217 [2024-07-14 04:50:18.185743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.217 [2024-07-14 04:50:18.185760] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.217 [2024-07-14 04:50:18.186029] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.217 [2024-07-14 04:50:18.186267] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.217 [2024-07-14 04:50:18.186292] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.217 [2024-07-14 04:50:18.186308] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.217 [2024-07-14 04:50:18.189885] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.217 [2024-07-14 04:50:18.198882] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.217 [2024-07-14 04:50:18.199317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.217 [2024-07-14 04:50:18.199347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.217 [2024-07-14 04:50:18.199363] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.217 [2024-07-14 04:50:18.199589] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.217 [2024-07-14 04:50:18.199796] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.217 [2024-07-14 04:50:18.199817] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.217 [2024-07-14 04:50:18.199831] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.217 [2024-07-14 04:50:18.203069] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.217 [2024-07-14 04:50:18.208350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:58.217 [2024-07-14 04:50:18.212338] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.217 [2024-07-14 04:50:18.212806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.217 [2024-07-14 04:50:18.212837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.217 [2024-07-14 04:50:18.212862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.217 [2024-07-14 04:50:18.213102] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.217 [2024-07-14 04:50:18.213328] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.217 [2024-07-14 04:50:18.213350] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.217 [2024-07-14 04:50:18.213365] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.217 [2024-07-14 04:50:18.216456] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.217 [2024-07-14 04:50:18.225632] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.217 [2024-07-14 04:50:18.226247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.217 [2024-07-14 04:50:18.226284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.217 [2024-07-14 04:50:18.226304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.217 [2024-07-14 04:50:18.226556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.217 [2024-07-14 04:50:18.226768] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.217 [2024-07-14 04:50:18.226789] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.218 [2024-07-14 04:50:18.226806] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.218 [2024-07-14 04:50:18.229819] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.218 [2024-07-14 04:50:18.238985] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.218 [2024-07-14 04:50:18.239422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.218 [2024-07-14 04:50:18.239451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.218 [2024-07-14 04:50:18.239477] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.218 [2024-07-14 04:50:18.239719] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.218 [2024-07-14 04:50:18.239934] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.218 [2024-07-14 04:50:18.239955] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.218 [2024-07-14 04:50:18.239969] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.218 [2024-07-14 04:50:18.243008] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.218 [2024-07-14 04:50:18.252289] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.218 [2024-07-14 04:50:18.252791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.218 [2024-07-14 04:50:18.252821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.218 [2024-07-14 04:50:18.252839] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.218 [2024-07-14 04:50:18.253083] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.218 [2024-07-14 04:50:18.253310] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.218 [2024-07-14 04:50:18.253331] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.218 [2024-07-14 04:50:18.253346] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.218 [2024-07-14 04:50:18.256359] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.218 [2024-07-14 04:50:18.265721] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.218 [2024-07-14 04:50:18.266400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.218 [2024-07-14 04:50:18.266436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.218 [2024-07-14 04:50:18.266456] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.218 [2024-07-14 04:50:18.266710] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.218 [2024-07-14 04:50:18.266951] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.218 [2024-07-14 04:50:18.266973] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.218 [2024-07-14 04:50:18.266990] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.218 [2024-07-14 04:50:18.269969] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.218 [2024-07-14 04:50:18.279115] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.218 [2024-07-14 04:50:18.279625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.218 [2024-07-14 04:50:18.279654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.218 [2024-07-14 04:50:18.279671] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.218 [2024-07-14 04:50:18.279910] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.218 [2024-07-14 04:50:18.280124] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.218 [2024-07-14 04:50:18.280155] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.218 [2024-07-14 04:50:18.280192] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.218 [2024-07-14 04:50:18.283204] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.218 [2024-07-14 04:50:18.292380] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.218 [2024-07-14 04:50:18.292829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.218 [2024-07-14 04:50:18.292874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.218 [2024-07-14 04:50:18.292894] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.218 [2024-07-14 04:50:18.293138] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.218 [2024-07-14 04:50:18.293372] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.218 [2024-07-14 04:50:18.293394] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.218 [2024-07-14 04:50:18.293408] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.218 [2024-07-14 04:50:18.296434] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.218 [2024-07-14 04:50:18.297377] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:58.218 [2024-07-14 04:50:18.297410] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:58.218 [2024-07-14 04:50:18.297440] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:58.218 [2024-07-14 04:50:18.297451] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:58.218 [2024-07-14 04:50:18.297461] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:58.218 [2024-07-14 04:50:18.297535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:58.218 [2024-07-14 04:50:18.297594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:58.218 [2024-07-14 04:50:18.297597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:58.218 [2024-07-14 04:50:18.306267] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.218 [2024-07-14 04:50:18.306879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.218 [2024-07-14 04:50:18.306915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.218 [2024-07-14 04:50:18.306936] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.218 [2024-07-14 04:50:18.307185] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.218 [2024-07-14 04:50:18.307425] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.218 [2024-07-14 04:50:18.307447] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.218 [2024-07-14 04:50:18.307464] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.218 [2024-07-14 04:50:18.310752] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.218 [2024-07-14 04:50:18.319934] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.218 [2024-07-14 04:50:18.320582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.218 [2024-07-14 04:50:18.320620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.218 [2024-07-14 04:50:18.320652] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.218 [2024-07-14 04:50:18.320890] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.218 [2024-07-14 04:50:18.321116] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.218 [2024-07-14 04:50:18.321139] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.218 [2024-07-14 04:50:18.321157] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.218 [2024-07-14 04:50:18.324442] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.218 [2024-07-14 04:50:18.333544] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.218 [2024-07-14 04:50:18.334154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.218 [2024-07-14 04:50:18.334205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.218 [2024-07-14 04:50:18.334226] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.218 [2024-07-14 04:50:18.334475] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.218 [2024-07-14 04:50:18.334695] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.218 [2024-07-14 04:50:18.334716] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.218 [2024-07-14 04:50:18.334734] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.218 [2024-07-14 04:50:18.337979] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.218 [2024-07-14 04:50:18.347224] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.218 [2024-07-14 04:50:18.347905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.218 [2024-07-14 04:50:18.347947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.218 [2024-07-14 04:50:18.347967] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.218 [2024-07-14 04:50:18.348195] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.218 [2024-07-14 04:50:18.348436] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.218 [2024-07-14 04:50:18.348469] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.218 [2024-07-14 04:50:18.348486] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.218 [2024-07-14 04:50:18.351707] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.218 [2024-07-14 04:50:18.360764] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.218 [2024-07-14 04:50:18.361395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.218 [2024-07-14 04:50:18.361433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.218 [2024-07-14 04:50:18.361454] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.218 [2024-07-14 04:50:18.361690] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.218 [2024-07-14 04:50:18.361936] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.219 [2024-07-14 04:50:18.361969] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.219 [2024-07-14 04:50:18.361987] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.219 [2024-07-14 04:50:18.365206] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.219 [2024-07-14 04:50:18.374524] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.219 [2024-07-14 04:50:18.375123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.219 [2024-07-14 04:50:18.375159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.219 [2024-07-14 04:50:18.375180] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.219 [2024-07-14 04:50:18.375422] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.219 [2024-07-14 04:50:18.375641] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.219 [2024-07-14 04:50:18.375672] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.219 [2024-07-14 04:50:18.375690] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.219 [2024-07-14 04:50:18.378987] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.219 [2024-07-14 04:50:18.388147] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.219 [2024-07-14 04:50:18.388613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.219 [2024-07-14 04:50:18.388642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.219 [2024-07-14 04:50:18.388659] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.219 [2024-07-14 04:50:18.388881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.219 [2024-07-14 04:50:18.389113] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.219 [2024-07-14 04:50:18.389135] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.219 [2024-07-14 04:50:18.389150] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.219 [2024-07-14 04:50:18.392405] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.219 [2024-07-14 04:50:18.401829] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.219 [2024-07-14 04:50:18.402260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.219 [2024-07-14 04:50:18.402294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.219 [2024-07-14 04:50:18.402311] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.219 [2024-07-14 04:50:18.402598] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.219 [2024-07-14 04:50:18.402821] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.219 [2024-07-14 04:50:18.402843] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.219 [2024-07-14 04:50:18.402858] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.219 [2024-07-14 04:50:18.406277] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.479 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:58.479 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:33:58.479 04:50:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:58.479 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:58.479 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:58.479 [2024-07-14 04:50:18.415601] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.479 [2024-07-14 04:50:18.416037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.479 [2024-07-14 04:50:18.416068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.479 [2024-07-14 04:50:18.416084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.479 [2024-07-14 04:50:18.416314] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.479 [2024-07-14 04:50:18.416527] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.479 [2024-07-14 04:50:18.416549] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.479 [2024-07-14 04:50:18.416564] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.479 [2024-07-14 04:50:18.419802] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.479 [2024-07-14 04:50:18.429217] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.479 [2024-07-14 04:50:18.429669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.479 [2024-07-14 04:50:18.429698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.479 [2024-07-14 04:50:18.429714] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.479 [2024-07-14 04:50:18.429969] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.479 [2024-07-14 04:50:18.430190] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.479 [2024-07-14 04:50:18.430227] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.479 [2024-07-14 04:50:18.430242] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.479 [2024-07-14 04:50:18.433445] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.479 04:50:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:58.479 04:50:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:58.479 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.479 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:58.479 [2024-07-14 04:50:18.438455] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:58.479 [2024-07-14 04:50:18.442728] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.479 [2024-07-14 04:50:18.443172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.479 [2024-07-14 04:50:18.443205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.479 [2024-07-14 04:50:18.443222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.479 [2024-07-14 04:50:18.443449] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.479 [2024-07-14 04:50:18.443663] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.479 [2024-07-14 04:50:18.443691] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.479 [2024-07-14 04:50:18.443705] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.479 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.479 04:50:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:58.479 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.479 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:58.479 [2024-07-14 04:50:18.447107] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.479 [2024-07-14 04:50:18.456223] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.479 [2024-07-14 04:50:18.456726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.479 [2024-07-14 04:50:18.456755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.479 [2024-07-14 04:50:18.456771] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.479 [2024-07-14 04:50:18.457002] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.479 [2024-07-14 04:50:18.457251] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.479 [2024-07-14 04:50:18.457272] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.479 [2024-07-14 04:50:18.457285] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:58.479 [2024-07-14 04:50:18.460515] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.479 [2024-07-14 04:50:18.469752] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.479 [2024-07-14 04:50:18.470255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.479 [2024-07-14 04:50:18.470285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.479 [2024-07-14 04:50:18.470302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.479 [2024-07-14 04:50:18.470519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.479 [2024-07-14 04:50:18.470749] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.479 [2024-07-14 04:50:18.470770] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.479 [2024-07-14 04:50:18.470785] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.479 [2024-07-14 04:50:18.474027] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.479 [2024-07-14 04:50:18.483286] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.479 [2024-07-14 04:50:18.484034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.479 [2024-07-14 04:50:18.484088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.479 [2024-07-14 04:50:18.484109] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.479 [2024-07-14 04:50:18.484362] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.479 [2024-07-14 04:50:18.484582] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.479 [2024-07-14 04:50:18.484604] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.479 [2024-07-14 04:50:18.484631] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.479 Malloc0 00:33:58.479 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.479 [2024-07-14 04:50:18.487846] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.479 04:50:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:58.479 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.479 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:58.479 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.479 04:50:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:58.479 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.480 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:58.480 [2024-07-14 04:50:18.497022] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.480 [2024-07-14 04:50:18.497478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.480 [2024-07-14 04:50:18.497507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cee70 with addr=10.0.0.2, port=4420 00:33:58.480 [2024-07-14 04:50:18.497523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cee70 is same with the state(5) to be set 00:33:58.480 [2024-07-14 04:50:18.497751] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cee70 (9): Bad file descriptor 00:33:58.480 [2024-07-14 04:50:18.497993] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.480 [2024-07-14 04:50:18.498016] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.480 [2024-07-14 04:50:18.498031] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.480 [2024-07-14 04:50:18.501339] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.480 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.480 04:50:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:58.480 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.480 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:58.480 [2024-07-14 04:50:18.507264] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:58.480 [2024-07-14 04:50:18.510645] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.480 04:50:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.480 04:50:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2945143 00:33:58.480 [2024-07-14 04:50:18.635285] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
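(For reference, the target bring-up that bdevperf.sh replays above through rpc_cmd can also be issued by hand with SPDK's rpc.py. A minimal sketch, using the same /var/tmp/spdk.sock socket and cvl_0_0_ns_spdk namespace shown when this nvmf_tgt was started; the abbreviated scripts/rpc.py path is an assumption, the full checkout path in this workspace is longer.)
  # transport options exactly as recorded in the xtrace above (-t tcp -o -u 8192)
  sudo ip netns exec cvl_0_0_ns_spdk scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
  # 64 MB malloc bdev with 512-byte blocks to back the namespace
  sudo ip netns exec cvl_0_0_ns_spdk scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
  # subsystem with serial SPDK00000000000001, open to any host (-a)
  sudo ip netns exec cvl_0_0_ns_spdk scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  sudo ip netns exec cvl_0_0_ns_spdk scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # listen on 10.0.0.2:4420 so the host-side reconnect loop above can finally succeed
  sudo ip netns exec cvl_0_0_ns_spdk scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420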
00:34:08.486 00:34:08.486 Latency(us) 00:34:08.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:08.486 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:08.486 Verification LBA range: start 0x0 length 0x4000 00:34:08.486 Nvme1n1 : 15.01 6812.58 26.61 9257.02 0.00 7941.87 849.54 24078.41 00:34:08.486 =================================================================================================================== 00:34:08.486 Total : 6812.58 26.61 9257.02 0.00 7941.87 849.54 24078.41 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:08.486 rmmod nvme_tcp 00:34:08.486 rmmod nvme_fabrics 00:34:08.486 rmmod nvme_keyring 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2945805 ']' 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2945805 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 2945805 ']' 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 2945805 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2945805 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2945805' 00:34:08.486 killing process with pid 2945805 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 2945805 00:34:08.486 04:50:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 2945805 00:34:08.486 04:50:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:08.486 04:50:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
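(The teardown interleaved above is the nvmftestfini path from bdevperf.sh: drop the subsystem over RPC, unload the host-side NVMe/TCP kernel modules, then kill the target. A rough manual equivalent, with $nvmfpid standing in as a placeholder for the nvmf_tgt PID, 2945805 in this run:)
  sudo ip netns exec cvl_0_0_ns_spdk scripts/rpc.py -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # removing nvme-tcp also drops nvme_fabrics and nvme_keyring, per the rmmod lines above
  sudo modprobe -v -r nvme-tcp
  sudo modprobe -v -r nvme-fabrics
  kill "$nvmfpid"    # placeholder; the test script waits on this PID before continuing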
00:34:08.486 04:50:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:08.486 04:50:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:08.486 04:50:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:08.486 04:50:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.486 04:50:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:08.486 04:50:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:09.868 04:50:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:10.128 00:34:10.128 real 0m22.201s 00:34:10.128 user 0m59.576s 00:34:10.128 sys 0m4.091s 00:34:10.128 04:50:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:10.128 04:50:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:10.128 ************************************ 00:34:10.128 END TEST nvmf_bdevperf 00:34:10.128 ************************************ 00:34:10.128 04:50:30 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:10.128 04:50:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:10.128 04:50:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:10.128 04:50:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:10.128 ************************************ 00:34:10.128 START TEST nvmf_target_disconnect 00:34:10.128 ************************************ 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:10.128 * Looking for test storage... 
00:34:10.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:10.128 04:50:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:12.033 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:12.033 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:12.033 04:50:32 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:12.033 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:12.033 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:12.033 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:12.034 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:12.034 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:12.034 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:12.034 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:12.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:12.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:34:12.292 00:34:12.292 --- 10.0.0.2 ping statistics --- 00:34:12.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.292 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:12.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:12.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:34:12.292 00:34:12.292 --- 10.0.0.1 ping statistics --- 00:34:12.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.292 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:12.292 ************************************ 00:34:12.292 START TEST nvmf_target_disconnect_tc1 00:34:12.292 ************************************ 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:12.292 
04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:12.292 EAL: No free 2048 kB hugepages reported on node 1 00:34:12.292 [2024-07-14 04:50:32.453346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.292 [2024-07-14 04:50:32.453431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1304520 with addr=10.0.0.2, port=4420 00:34:12.292 [2024-07-14 04:50:32.453472] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:12.292 [2024-07-14 04:50:32.453496] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:12.292 [2024-07-14 04:50:32.453512] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:12.292 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:12.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:12.292 Initializing NVMe Controllers 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:12.292 00:34:12.292 real 0m0.094s 00:34:12.292 user 0m0.043s 00:34:12.292 sys 
0m0.050s 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:12.292 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:12.292 ************************************ 00:34:12.292 END TEST nvmf_target_disconnect_tc1 00:34:12.292 ************************************ 00:34:12.550 04:50:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:12.550 04:50:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:12.551 04:50:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:12.551 04:50:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:12.551 ************************************ 00:34:12.551 START TEST nvmf_target_disconnect_tc2 00:34:12.551 ************************************ 00:34:12.551 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:34:12.551 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:12.551 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:12.551 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:12.551 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:12.551 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:12.551 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2948951 00:34:12.551 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:12.551 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2948951 00:34:12.551 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 2948951 ']' 00:34:12.551 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:12.551 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:12.551 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:12.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:12.551 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:12.551 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:12.551 [2024-07-14 04:50:32.564598] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:34:12.551 [2024-07-14 04:50:32.564682] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:12.551 EAL: No free 2048 kB hugepages reported on node 1 00:34:12.551 [2024-07-14 04:50:32.636552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:12.551 [2024-07-14 04:50:32.722623] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:12.551 [2024-07-14 04:50:32.722690] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:12.551 [2024-07-14 04:50:32.722718] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:12.551 [2024-07-14 04:50:32.722729] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:12.551 [2024-07-14 04:50:32.722738] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:12.551 [2024-07-14 04:50:32.722788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:12.551 [2024-07-14 04:50:32.722864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:12.551 [2024-07-14 04:50:32.722986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:12.551 [2024-07-14 04:50:32.722990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:12.809 Malloc0 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:12.809 [2024-07-14 04:50:32.882313] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:12.809 [2024-07-14 04:50:32.910513] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2948980 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:12.809 04:50:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:12.809 EAL: No free 2048 kB hugepages reported on node 1 00:34:15.375 04:50:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2948951 00:34:15.375 04:50:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:15.375 Read completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Read completed with error (sct=0, sc=8) 
00:34:15.375 starting I/O failed 00:34:15.375 Read completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Read completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Read completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Read completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Read completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Read completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Read completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Write completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Write completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Read completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Write completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Read completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Read completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Write completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Read completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Read completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Read completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Read completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Write completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Read completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Write completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Read completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Write completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Read completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Read completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Write completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Write completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Write completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Read completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.375 Read completed with error (sct=0, sc=8) 00:34:15.375 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 [2024-07-14 04:50:34.934747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 
starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Write completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Write completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Write completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Write completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Write completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 [2024-07-14 04:50:34.935173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Write completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Write completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Write completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 
00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Write completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Write completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Write completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Write completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Write completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Write completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Write completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Read completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 Write completed with error (sct=0, sc=8) 00:34:15.376 starting I/O failed 00:34:15.376 [2024-07-14 04:50:34.935478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.376 [2024-07-14 04:50:34.935695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.376 [2024-07-14 04:50:34.935727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.376 qpair failed and we were unable to recover it. 00:34:15.376 [2024-07-14 04:50:34.935965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.376 [2024-07-14 04:50:34.935994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.376 qpair failed and we were unable to recover it. 00:34:15.376 [2024-07-14 04:50:34.936183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.376 [2024-07-14 04:50:34.936210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.376 qpair failed and we were unable to recover it. 00:34:15.376 [2024-07-14 04:50:34.936398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.376 [2024-07-14 04:50:34.936424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.376 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.936594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.936620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 
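The connect() failures above are the fault under test, not an infrastructure problem: target_disconnect.sh starts the reconnect example against 10.0.0.2:4420 and then SIGKILLs the nvmf target (the "kill -9 2948951" recorded earlier in this log), so every later reconnect attempt is refused with errno 111 (ECONNREFUSED). A minimal sketch of that injection sequence, reconstructed from the commands shown in this log; it is not the literal autotest script, and rpc.py is assumed here as the command-line equivalent of the rpc_cmd wrapper the script actually uses:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk     # repo path taken from this log
    NS_EXEC="ip netns exec cvl_0_0_ns_spdk"                    # target-side namespace from this log

    # Start the target inside the namespace, as logged (nvmfappstart -m 0xF0).
    $NS_EXEC $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    sleep 2   # simplification; the real script uses waitforlisten on the RPC socket

    # Provision the subsystem the log shows being created via rpc_cmd.
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Drive I/O from the root namespace, then yank the target out from under it.
    $SPDK/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 $nvmfpid   # from here on, each reconnect attempt gets ECONNREFUSED (errno 111)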
00:34:15.377 [2024-07-14 04:50:34.936762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.936787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.936987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.937014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.937176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.937202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.937391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.937418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.937606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.937634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.937816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.937848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.938009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.938036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.938186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.938212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.938404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.938430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.938641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.938666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 
00:34:15.377 [2024-07-14 04:50:34.938847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.938889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.939048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.939074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.939261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.939287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.939516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.939559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.939743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.939769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.939960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.939988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.940176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.940203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.940381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.940407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.940584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.940610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.940771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.940797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 
00:34:15.377 [2024-07-14 04:50:34.940965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.940992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.941177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.941203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.941450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.941476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.941638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.941664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.941847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.941887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.942060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.942087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.942291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.942335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.942506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.942548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.942821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.942847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 00:34:15.377 [2024-07-14 04:50:34.943038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.377 [2024-07-14 04:50:34.943065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.377 qpair failed and we were unable to recover it. 
00:34:15.377 [2024-07-14 04:50:34.943255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.943282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.943528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.943572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.943757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.943785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.943995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.944023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.944185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.944211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.944406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.944432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.944675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.944732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.944932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.944960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.945108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.945136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.945365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.945406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 
00:34:15.378 [2024-07-14 04:50:34.945594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.945625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.945910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.945938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.946126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.946152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.946367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.946396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.946630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.946676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.946961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.946993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.947153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.947190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.947400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.947426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.947609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.947635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.947815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.947841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 
00:34:15.378 [2024-07-14 04:50:34.948036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.948078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.948306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.948346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.948564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.948591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.948806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.948849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.949027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.949053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.949214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.949240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.949441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.949484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.949743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.949769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.949976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.950003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 00:34:15.378 [2024-07-14 04:50:34.950227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.950258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.378 qpair failed and we were unable to recover it. 
00:34:15.378 [2024-07-14 04:50:34.950493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.378 [2024-07-14 04:50:34.950519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.950753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.950795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.950988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.951016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.951222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.951267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.951482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.951509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.951726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.951752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.951971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.951998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.952155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.952185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.952378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.952404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.952620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.952647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 
00:34:15.379 [2024-07-14 04:50:34.952853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.952891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.953057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.953083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.953283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.953338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.953552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.953580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.953760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.953787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.953973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.954001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.954178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.954205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.954357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.954384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.954638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.954664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.954886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.954913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 
00:34:15.379 [2024-07-14 04:50:34.955117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.955143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.955298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.955326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.955522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.955548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.955759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.955789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.956064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.956091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.956281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.956313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.957190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.957218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.957573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.957647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.957830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.957879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 00:34:15.379 [2024-07-14 04:50:34.958046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.379 [2024-07-14 04:50:34.958073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.379 qpair failed and we were unable to recover it. 
00:34:15.380 [2024-07-14 04:50:34.958252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.958279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.958443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.958469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.958680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.958707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.958908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.958935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.959142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.959182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.959379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.959406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.959602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.959628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.959819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.959847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.960077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.960103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.960293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.960320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 
00:34:15.380 [2024-07-14 04:50:34.960502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.960529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.960711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.960737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.960946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.960973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.961132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.961175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.961392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.961418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.961628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.961654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.961858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.961889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.962049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.962076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.962268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.962294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.962460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.962490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 
00:34:15.380 [2024-07-14 04:50:34.962731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.962761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.962960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.962986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.963169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.963195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.963384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.963412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.963621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.963647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.963821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.963846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.964019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.964047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.964222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.964248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.380 [2024-07-14 04:50:34.964426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.380 [2024-07-14 04:50:34.964453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.380 qpair failed and we were unable to recover it. 00:34:15.381 [2024-07-14 04:50:34.964663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.381 [2024-07-14 04:50:34.964694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.381 qpair failed and we were unable to recover it. 
00:34:15.381 [2024-07-14 04:50:34.964925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.381 [2024-07-14 04:50:34.964952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.381 qpair failed and we were unable to recover it. 00:34:15.381 [2024-07-14 04:50:34.965137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.381 [2024-07-14 04:50:34.965178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.381 qpair failed and we were unable to recover it. 00:34:15.381 [2024-07-14 04:50:34.965399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.381 [2024-07-14 04:50:34.965426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.381 qpair failed and we were unable to recover it. 00:34:15.381 [2024-07-14 04:50:34.965613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.381 [2024-07-14 04:50:34.965640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.381 qpair failed and we were unable to recover it. 00:34:15.381 [2024-07-14 04:50:34.965821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.381 [2024-07-14 04:50:34.965848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.381 qpair failed and we were unable to recover it. 00:34:15.381 [2024-07-14 04:50:34.966016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.381 [2024-07-14 04:50:34.966050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.381 qpair failed and we were unable to recover it. 00:34:15.381 [2024-07-14 04:50:34.966259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.381 [2024-07-14 04:50:34.966286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.381 qpair failed and we were unable to recover it. 00:34:15.381 [2024-07-14 04:50:34.966469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.381 [2024-07-14 04:50:34.966498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.381 qpair failed and we were unable to recover it. 00:34:15.381 [2024-07-14 04:50:34.966725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.381 [2024-07-14 04:50:34.966752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.381 qpair failed and we were unable to recover it. 00:34:15.381 [2024-07-14 04:50:34.966937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.381 [2024-07-14 04:50:34.966965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.381 qpair failed and we were unable to recover it. 
00:34:15.381 [2024-07-14 04:50:34.967132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.381 [2024-07-14 04:50:34.967159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.381 qpair failed and we were unable to recover it. 00:34:15.381 [2024-07-14 04:50:34.967348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.381 [2024-07-14 04:50:34.967375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.381 qpair failed and we were unable to recover it. 00:34:15.381 [2024-07-14 04:50:34.967568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.381 [2024-07-14 04:50:34.967594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.381 qpair failed and we were unable to recover it. 00:34:15.381 [2024-07-14 04:50:34.967804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.381 [2024-07-14 04:50:34.967846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.968033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.968062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.968291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.968318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.968526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.968553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.968806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.968847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.969075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.969102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.969305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.969331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 
00:34:15.383 [2024-07-14 04:50:34.969543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.969585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.969765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.969791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.969946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.969973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.970169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.970196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.970391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.970418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.970627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.970653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.970874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.970902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.971078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.971105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.971256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.971283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.971474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.971500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 
00:34:15.383 [2024-07-14 04:50:34.971721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.971748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.971930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.971957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.972146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.972173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.972323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.972350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.972497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.972537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.972731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.972773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.972983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.973010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.973203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.973228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.973445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.973477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.383 [2024-07-14 04:50:34.973667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.973693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 
00:34:15.383 [2024-07-14 04:50:34.973906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.383 [2024-07-14 04:50:34.973933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.383 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.974107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.974134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.974325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.974352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.974569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.974596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.974807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.974834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.975017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.975048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.975208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.975235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.975408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.975434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.975588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.975614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.975804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.975829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 
00:34:15.384 [2024-07-14 04:50:34.975996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.976023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.976173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.976200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.976456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.976482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.976695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.976725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.976933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.976960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.977112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.977140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.977341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.977368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.977548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.977574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.977751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.977777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.977998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.978025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 
00:34:15.384 [2024-07-14 04:50:34.978211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.978238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.978435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.978462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.978681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.978707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.978906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.978933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.979108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.979134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.979398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.979428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.979660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.979686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.979854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.979886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.980069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.980096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.980244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.980284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 
00:34:15.384 [2024-07-14 04:50:34.980495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.980522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.384 [2024-07-14 04:50:34.980737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.384 [2024-07-14 04:50:34.980763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.384 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.980950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.980979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.981162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.981189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.981395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.981421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.981613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.981640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.981904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.981931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.982233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.982282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.982511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.982537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.982701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.982728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 
00:34:15.385 [2024-07-14 04:50:34.982897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.982925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.983076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.983104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.983349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.983375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.983586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.983613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.983819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.983846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.984075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.984102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.984337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.984366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.984573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.984601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.984842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.984889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.985104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.985133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 
00:34:15.385 [2024-07-14 04:50:34.985345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.985372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.985686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.985716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.985930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.985958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.986142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.986169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.986337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.986363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.986517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.986544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.986747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.986778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.987017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.987044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.987278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.987307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.987519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.987559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 
00:34:15.385 [2024-07-14 04:50:34.987765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.987790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.987983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.988010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.988203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.988230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.988421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.988446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.385 [2024-07-14 04:50:34.988676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.385 [2024-07-14 04:50:34.988705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.385 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.988911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.988938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.989095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.989122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.989291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.989321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.989518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.989544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.989767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.989793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 
00:34:15.386 [2024-07-14 04:50:34.990034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.990064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.990269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.990297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.990496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.990527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.990708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.990735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.990945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.990972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.991186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.991211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.991366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.991392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.991620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.991646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.991862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.991909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.992122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.992149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 
00:34:15.386 [2024-07-14 04:50:34.992332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.992358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.992568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.992594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.992808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.992834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.993046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.993073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.993257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.993284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.993477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.993504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.993692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.993718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.993972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.993999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.994288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.994315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.994524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.994550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 
00:34:15.386 [2024-07-14 04:50:34.994735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.994766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.994942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.994985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.995183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.995210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.995406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.995432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.995627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.995654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.995859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.386 [2024-07-14 04:50:34.995890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.386 qpair failed and we were unable to recover it. 00:34:15.386 [2024-07-14 04:50:34.996080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:34.996107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:34.996307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:34.996334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:34.996552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:34.996578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:34.996798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:34.996828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 
00:34:15.387 [2024-07-14 04:50:34.997008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:34.997035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:34.997238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:34.997264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:34.997484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:34.997525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:34.997735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:34.997777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:34.997962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:34.997991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:34.998208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:34.998248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:34.998467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:34.998508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:34.998718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:34.998745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:34.998961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:34.998988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:34.999171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:34.999198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 
00:34:15.387 [2024-07-14 04:50:34.999433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:34.999460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:34.999697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:34.999726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:34.999903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:34.999935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:35.000158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:35.000183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:35.000330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:35.000356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:35.000567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:35.000610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:35.000790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:35.000817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:35.001006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:35.001034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:35.001245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:35.001271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:35.001469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:35.001495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 
00:34:15.387 [2024-07-14 04:50:35.001763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:35.001788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:35.001992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:35.002020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:35.002224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:35.002251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:35.002451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:35.002481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:35.002705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:35.002746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:35.002964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:35.003006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:35.003223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.387 [2024-07-14 04:50:35.003254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.387 qpair failed and we were unable to recover it. 00:34:15.387 [2024-07-14 04:50:35.003461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.003488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 00:34:15.388 [2024-07-14 04:50:35.003671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.003697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 00:34:15.388 [2024-07-14 04:50:35.003849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.003894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 
00:34:15.388 [2024-07-14 04:50:35.004081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.004107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 00:34:15.388 [2024-07-14 04:50:35.004321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.004347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 00:34:15.388 [2024-07-14 04:50:35.004650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.004679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 00:34:15.388 [2024-07-14 04:50:35.004895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.004921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 00:34:15.388 [2024-07-14 04:50:35.005126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.005153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 00:34:15.388 [2024-07-14 04:50:35.005348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.005374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 00:34:15.388 [2024-07-14 04:50:35.005531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.005557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 00:34:15.388 [2024-07-14 04:50:35.005767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.005794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 00:34:15.388 [2024-07-14 04:50:35.006044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.006070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 00:34:15.388 [2024-07-14 04:50:35.006358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.006385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 
00:34:15.388 [2024-07-14 04:50:35.006591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.006618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 00:34:15.388 [2024-07-14 04:50:35.006838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.006869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 00:34:15.388 [2024-07-14 04:50:35.007043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.007071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 00:34:15.388 [2024-07-14 04:50:35.007279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.007306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 00:34:15.388 [2024-07-14 04:50:35.007463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.007491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 00:34:15.388 [2024-07-14 04:50:35.007672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.007698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 00:34:15.388 [2024-07-14 04:50:35.007888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.007915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 00:34:15.388 [2024-07-14 04:50:35.008120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.008150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 00:34:15.388 [2024-07-14 04:50:35.008354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.008381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 00:34:15.388 [2024-07-14 04:50:35.008555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.388 [2024-07-14 04:50:35.008582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.388 qpair failed and we were unable to recover it. 
00:34:15.389 [2024-07-14 04:50:35.008768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.008795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.009012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.009055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.009231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.009262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.009442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.009469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.009649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.009677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.009891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.009919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.010127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.010171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.010337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.010367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.010597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.010624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.010806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.010833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 
00:34:15.389 [2024-07-14 04:50:35.011048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.011079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.011310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.011336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.011548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.011575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.011758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.011785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.011967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.011995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.012152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.012180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.012373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.012401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.012581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.012609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.012770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.012797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.012949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.012976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 
00:34:15.389 [2024-07-14 04:50:35.013162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.013189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.013363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.013390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.013589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.013619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.013849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.013881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.014060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.014087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.014297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.014324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.014485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.014512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.014718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.014745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.014946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.014976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.389 [2024-07-14 04:50:35.015157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.015185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 
00:34:15.389 [2024-07-14 04:50:35.015386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.389 [2024-07-14 04:50:35.015415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.389 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.015641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.015667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.015832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.015858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.016041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.016068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.016316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.016342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.016546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.016572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.016806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.016835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.017065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.017092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.017300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.017327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.017510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.017537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 
00:34:15.390 [2024-07-14 04:50:35.017742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.017774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.017954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.017982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.018168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.018200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.018383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.018409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.018585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.018611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.018817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.018843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.019004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.019032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.019240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.019267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.019451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.019478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.019656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.019683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 
00:34:15.390 [2024-07-14 04:50:35.019890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.019937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.020092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.020119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.020303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.020330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.020477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.020504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.020679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.020706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.020908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.020936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.021149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.021176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.021332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.021359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.021537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.021565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.390 [2024-07-14 04:50:35.021750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.021777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 
00:34:15.390 [2024-07-14 04:50:35.021969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.390 [2024-07-14 04:50:35.022000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.390 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.022204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.022233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.022393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.022421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.022598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.022625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.022830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.022857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.023048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.023075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.023254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.023282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.023471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.023498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.023677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.023704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.023886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.023914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 
00:34:15.391 [2024-07-14 04:50:35.024073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.024101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.024260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.024287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.024466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.024494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.024707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.024734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.024916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.024943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.025120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.025146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.025297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.025325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.025535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.025562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.025769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.025796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.026004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.026032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 
00:34:15.391 [2024-07-14 04:50:35.026245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.026272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.026476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.026502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.026679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.026726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.027032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.027060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.027288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.027318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.027546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.027575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.027758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.027785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.027969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.027998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.028228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.028258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.028436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.028462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 
00:34:15.391 [2024-07-14 04:50:35.028689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.028715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.028911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.028938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.029086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.391 [2024-07-14 04:50:35.029113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.391 qpair failed and we were unable to recover it. 00:34:15.391 [2024-07-14 04:50:35.029309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.029334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.029533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.029560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.029766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.029792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.030007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.030050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.030236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.030263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.030456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.030482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.030675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.030716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 
00:34:15.392 [2024-07-14 04:50:35.030925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.030955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.031146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.031173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.031371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.031398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.031600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.031626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.031824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.031851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.032068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.032095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.032259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.032285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.032465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.032492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.032752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.032783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.033025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.033052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 
00:34:15.392 [2024-07-14 04:50:35.033204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.033232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.033533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.033559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.033779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.033808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.034015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.034043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.034219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.034246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.034472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.034498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.034715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.034755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.034949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.034976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.035153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.035180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.035344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.035371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 
00:34:15.392 [2024-07-14 04:50:35.035683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.035712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.036003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.036034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.036208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.036240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.036495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.036527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.036753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.036783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.392 qpair failed and we were unable to recover it. 00:34:15.392 [2024-07-14 04:50:35.036974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.392 [2024-07-14 04:50:35.037003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.037209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.037251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.037464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.037493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.037687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.037714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.037906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.037933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 
00:34:15.393 [2024-07-14 04:50:35.038120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.038147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.038328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.038355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.038533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.038560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.038722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.038749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.038958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.038985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.039157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.039183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.039422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.039453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.039695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.039722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.039914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.039942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.040115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.040141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 
00:34:15.393 [2024-07-14 04:50:35.040357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.040384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.040577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.040605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.040848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.040885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.041080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.041107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.041302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.041328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.041532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.041559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.041779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.041806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.041984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.042012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.042222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.042249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.042411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.042439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 
00:34:15.393 [2024-07-14 04:50:35.042661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.042686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.042896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.042926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.043128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.043155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.043543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.043606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.043841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.043872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.044055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.044082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.044240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.044282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.044506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.044532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.044705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.393 [2024-07-14 04:50:35.044731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.393 qpair failed and we were unable to recover it. 00:34:15.393 [2024-07-14 04:50:35.044932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.044959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 
00:34:15.394 [2024-07-14 04:50:35.045208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.045238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 00:34:15.394 [2024-07-14 04:50:35.045462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.045488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 00:34:15.394 [2024-07-14 04:50:35.045686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.045716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 00:34:15.394 [2024-07-14 04:50:35.045938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.045968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 00:34:15.394 [2024-07-14 04:50:35.046168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.046195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 00:34:15.394 [2024-07-14 04:50:35.046431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.046457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 00:34:15.394 [2024-07-14 04:50:35.046653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.046680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 00:34:15.394 [2024-07-14 04:50:35.046862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.046894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 00:34:15.394 [2024-07-14 04:50:35.047065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.047091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 00:34:15.394 [2024-07-14 04:50:35.047319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.047350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 
00:34:15.394 [2024-07-14 04:50:35.047552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.047578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 00:34:15.394 [2024-07-14 04:50:35.047809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.047838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 00:34:15.394 [2024-07-14 04:50:35.048076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.048104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 00:34:15.394 [2024-07-14 04:50:35.048311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.048338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 00:34:15.394 [2024-07-14 04:50:35.048515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.048545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 00:34:15.394 [2024-07-14 04:50:35.048801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.048827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 00:34:15.394 [2024-07-14 04:50:35.049039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.049066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 00:34:15.394 [2024-07-14 04:50:35.049274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.049303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 00:34:15.394 [2024-07-14 04:50:35.049527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.049568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 00:34:15.394 [2024-07-14 04:50:35.049787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.049814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 
00:34:15.394 [2024-07-14 04:50:35.050024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.050052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 00:34:15.394 [2024-07-14 04:50:35.050251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.394 [2024-07-14 04:50:35.050277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.394 qpair failed and we were unable to recover it. 00:34:15.394 [2024-07-14 04:50:35.050490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.050517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.050744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.050774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.050947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.050974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.051181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.051207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.051466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.051495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.051702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.051732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.051939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.051966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.052133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.052160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 
00:34:15.395 [2024-07-14 04:50:35.052360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.052387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.052560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.052587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.052800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.052829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.053005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.053035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.053254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.053280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.053431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.053458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.053672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.053699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.053908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.053935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.054152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.054193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.054417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.054444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 
00:34:15.395 [2024-07-14 04:50:35.054656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.054697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.054933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.054960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.055110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.055141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.055396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.055422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.055698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.055723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.055925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.055952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.056133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.056160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.056408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.056438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.056665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.056695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.056899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.056926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 
00:34:15.395 [2024-07-14 04:50:35.057105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.057131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.057316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.057345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.057525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.057551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.057726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.057753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.057978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.395 [2024-07-14 04:50:35.058006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.395 qpair failed and we were unable to recover it. 00:34:15.395 [2024-07-14 04:50:35.058207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.058234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.058441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.058471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.058667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.058698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.058897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.058925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.059151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.059181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 
00:34:15.396 [2024-07-14 04:50:35.059378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.059408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.059627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.059654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.059825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.059872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.060083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.060113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.060324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.060351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.060580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.060609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.060842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.060873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.061055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.061082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.061265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.061295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.061528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.061558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 
00:34:15.396 [2024-07-14 04:50:35.061771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.061798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.062000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.062030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.062196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.062225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.062394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.062422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.062618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.062647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.062813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.062843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.063074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.063101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.063305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.063336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.063544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.063571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.063748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.063779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 
00:34:15.396 [2024-07-14 04:50:35.064018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.064046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.064277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.064307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.064488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.064520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.064724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.064753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.396 [2024-07-14 04:50:35.064948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.396 [2024-07-14 04:50:35.064980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.396 qpair failed and we were unable to recover it. 00:34:15.397 [2024-07-14 04:50:35.065194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.397 [2024-07-14 04:50:35.065220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.397 qpair failed and we were unable to recover it. 00:34:15.397 [2024-07-14 04:50:35.065459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.397 [2024-07-14 04:50:35.065488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.397 qpair failed and we were unable to recover it. 00:34:15.397 [2024-07-14 04:50:35.065655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.397 [2024-07-14 04:50:35.065684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.397 qpair failed and we were unable to recover it. 00:34:15.397 [2024-07-14 04:50:35.065916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.397 [2024-07-14 04:50:35.065943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.397 qpair failed and we were unable to recover it. 00:34:15.397 [2024-07-14 04:50:35.066157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.397 [2024-07-14 04:50:35.066187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.397 qpair failed and we were unable to recover it. 
00:34:15.397 [2024-07-14 04:50:35.066413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:15.397 [2024-07-14 04:50:35.066443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 
00:34:15.397 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence (posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats back-to-back with only the timestamps advancing, from 2024-07-14 04:50:35.066 through 04:50:35.115 ...]
00:34:15.404 [2024-07-14 04:50:35.115512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:15.404 [2024-07-14 04:50:35.115542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 
00:34:15.404 qpair failed and we were unable to recover it. 
00:34:15.404 [2024-07-14 04:50:35.115736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.115765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.115996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.116024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.116223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.116253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.116474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.116504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.116732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.116759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.116933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.116963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.117189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.117219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.117449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.117476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.117683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.117713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.117916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.117951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 
00:34:15.404 [2024-07-14 04:50:35.118132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.118167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.118395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.118425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.118658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.118699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.118955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.118983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.119142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.119188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.119360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.119391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.119599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.119626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.119838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.119875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.120076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.120106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.120311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.120338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 
00:34:15.404 [2024-07-14 04:50:35.120622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.120648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.120971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.121004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.121214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.121240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.121457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.121500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.121733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.121759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.121976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.122003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-14 04:50:35.122214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-14 04:50:35.122244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.122444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.122473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.122650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.122678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.122932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.122963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 
00:34:15.405 [2024-07-14 04:50:35.123172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.123199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.123400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.123427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.123661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.123691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.123900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.123930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.124133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.124163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.124365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.124395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.124604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.124633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.124831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.124870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.125058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.125085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.125321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.125351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 
00:34:15.405 [2024-07-14 04:50:35.125584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.125611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.125784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.125810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.126006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.126033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.126243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.126270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.126507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.126537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.126772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.126799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.126998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.127026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.127222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.127248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.127463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.127492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.127720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.127751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 
00:34:15.405 [2024-07-14 04:50:35.127985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.128012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.128172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.128200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-14 04:50:35.128476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-14 04:50:35.128517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.128719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.128749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.128948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.128980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.129202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.129228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.129442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.129485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.129683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.129713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.129909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.129952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.130260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.130286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 
00:34:15.406 [2024-07-14 04:50:35.130512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.130556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.130793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.130820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.131033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.131065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.131268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.131299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.131478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.131505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.131661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.131688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.131860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.131892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.132106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.132133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.132326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.132356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.132581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.132611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 
00:34:15.406 [2024-07-14 04:50:35.132815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.132843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.133022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.133052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.133246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.133276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.133458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.133486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.133688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.133719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.133915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.133946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.134159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.134186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.134420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.134450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.134671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.134701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.134939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.134967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 
00:34:15.406 [2024-07-14 04:50:35.135198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.135229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.135396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.135426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.135623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-14 04:50:35.135650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-14 04:50:35.135883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.135914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.136105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.136135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.136353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.136379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.136608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.136638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.136807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.136839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.137269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.137323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.137522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.137556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 
00:34:15.407 [2024-07-14 04:50:35.137752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.137781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.137993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.138021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.138215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.138245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.138466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.138495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.138688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.138715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.138935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.138962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.139148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.139175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.139407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.139434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.139647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.139673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.139882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.139913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 
00:34:15.407 [2024-07-14 04:50:35.140132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.140158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.140361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.140386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.140596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.140627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.140875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.140902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.141100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.141129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.141339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.141380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.141592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.141618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.141849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.141884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.142085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.142115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.142316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.142343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 
00:34:15.407 [2024-07-14 04:50:35.142564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.142593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.142826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.142880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.143077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.143104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.143336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.143390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.143584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-14 04:50:35.143614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-14 04:50:35.143788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.408 [2024-07-14 04:50:35.143815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.408 qpair failed and we were unable to recover it. 00:34:15.408 [2024-07-14 04:50:35.143976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.408 [2024-07-14 04:50:35.144004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.408 qpair failed and we were unable to recover it. 00:34:15.408 [2024-07-14 04:50:35.144202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.408 [2024-07-14 04:50:35.144232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.408 qpair failed and we were unable to recover it. 00:34:15.408 [2024-07-14 04:50:35.144451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.408 [2024-07-14 04:50:35.144494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.408 qpair failed and we were unable to recover it. 00:34:15.408 [2024-07-14 04:50:35.144695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.408 [2024-07-14 04:50:35.144737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.408 qpair failed and we were unable to recover it. 
00:34:15.408 [2024-07-14 04:50:35.144936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.408 [2024-07-14 04:50:35.144967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.408 qpair failed and we were unable to recover it. 00:34:15.408 [2024-07-14 04:50:35.145164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.408 [2024-07-14 04:50:35.145194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.408 qpair failed and we were unable to recover it. 00:34:15.408 [2024-07-14 04:50:35.145360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.408 [2024-07-14 04:50:35.145386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.408 qpair failed and we were unable to recover it. 00:34:15.408 [2024-07-14 04:50:35.145620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.408 [2024-07-14 04:50:35.145650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.408 qpair failed and we were unable to recover it. 00:34:15.408 [2024-07-14 04:50:35.145853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.408 [2024-07-14 04:50:35.145888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.408 qpair failed and we were unable to recover it. 00:34:15.408 [2024-07-14 04:50:35.146117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.408 [2024-07-14 04:50:35.146147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.408 qpair failed and we were unable to recover it. 00:34:15.408 [2024-07-14 04:50:35.146346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.408 [2024-07-14 04:50:35.146372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.408 qpair failed and we were unable to recover it. 00:34:15.408 [2024-07-14 04:50:35.146620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.408 [2024-07-14 04:50:35.146649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.408 qpair failed and we were unable to recover it. 00:34:15.408 [2024-07-14 04:50:35.146830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.408 [2024-07-14 04:50:35.146860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.408 qpair failed and we were unable to recover it. 00:34:15.408 [2024-07-14 04:50:35.147080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.408 [2024-07-14 04:50:35.147115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.408 qpair failed and we were unable to recover it. 
00:34:15.408 [2024-07-14 04:50:35.147321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.408 [2024-07-14 04:50:35.147363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420
00:34:15.408 qpair failed and we were unable to recover it.
00:34:15.408 [... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats continuously with timestamps from 2024-07-14 04:50:35.147 through 04:50:35.195 ...]
00:34:15.415 [2024-07-14 04:50:35.195718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.415 [2024-07-14 04:50:35.195755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420
00:34:15.415 qpair failed and we were unable to recover it.
00:34:15.415 [2024-07-14 04:50:35.195932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.415 [2024-07-14 04:50:35.195959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.415 qpair failed and we were unable to recover it. 00:34:15.415 [2024-07-14 04:50:35.196142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.415 [2024-07-14 04:50:35.196169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.415 qpair failed and we were unable to recover it. 00:34:15.415 [2024-07-14 04:50:35.196346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.415 [2024-07-14 04:50:35.196373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.415 qpair failed and we were unable to recover it. 00:34:15.415 [2024-07-14 04:50:35.196582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.415 [2024-07-14 04:50:35.196609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.415 qpair failed and we were unable to recover it. 00:34:15.415 [2024-07-14 04:50:35.196792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.415 [2024-07-14 04:50:35.196819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.415 qpair failed and we were unable to recover it. 00:34:15.415 [2024-07-14 04:50:35.197004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.415 [2024-07-14 04:50:35.197031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.415 qpair failed and we were unable to recover it. 00:34:15.415 [2024-07-14 04:50:35.197185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.415 [2024-07-14 04:50:35.197212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.415 qpair failed and we were unable to recover it. 00:34:15.415 [2024-07-14 04:50:35.197373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.415 [2024-07-14 04:50:35.197400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.415 qpair failed and we were unable to recover it. 00:34:15.415 [2024-07-14 04:50:35.197613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.415 [2024-07-14 04:50:35.197640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.415 qpair failed and we were unable to recover it. 00:34:15.415 [2024-07-14 04:50:35.197797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.415 [2024-07-14 04:50:35.197825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.415 qpair failed and we were unable to recover it. 
00:34:15.415 [2024-07-14 04:50:35.198014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.415 [2024-07-14 04:50:35.198044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.415 qpair failed and we were unable to recover it. 00:34:15.415 [2024-07-14 04:50:35.198227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.415 [2024-07-14 04:50:35.198254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.415 qpair failed and we were unable to recover it. 00:34:15.415 [2024-07-14 04:50:35.198469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.415 [2024-07-14 04:50:35.198496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.415 qpair failed and we were unable to recover it. 00:34:15.415 [2024-07-14 04:50:35.198708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.415 [2024-07-14 04:50:35.198735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.415 qpair failed and we were unable to recover it. 00:34:15.415 [2024-07-14 04:50:35.198927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.415 [2024-07-14 04:50:35.198955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.415 qpair failed and we were unable to recover it. 00:34:15.415 [2024-07-14 04:50:35.199136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.415 [2024-07-14 04:50:35.199164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.415 qpair failed and we were unable to recover it. 00:34:15.415 [2024-07-14 04:50:35.199378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.415 [2024-07-14 04:50:35.199405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.415 qpair failed and we were unable to recover it. 00:34:15.415 [2024-07-14 04:50:35.199558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.415 [2024-07-14 04:50:35.199585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.415 qpair failed and we were unable to recover it. 00:34:15.415 [2024-07-14 04:50:35.199768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.415 [2024-07-14 04:50:35.199794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.415 qpair failed and we were unable to recover it. 00:34:15.415 [2024-07-14 04:50:35.199974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.200002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 
00:34:15.416 [2024-07-14 04:50:35.200241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.200271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.200507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.200537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.200761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.200788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.200972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.201000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.201186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.201213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.201375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.201402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.201560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.201588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.201767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.201794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.202007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.202034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.202225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.202255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 
00:34:15.416 [2024-07-14 04:50:35.202411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.202439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.202646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.202673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.202852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.202893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.203085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.203111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.203295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.203322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.203506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.203539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.203750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.203778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.203963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.203990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.204149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.204176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.204343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.204370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 
00:34:15.416 [2024-07-14 04:50:35.204554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.204580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.204782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.204811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.205001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.205028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.205190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.416 [2024-07-14 04:50:35.205217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.416 qpair failed and we were unable to recover it. 00:34:15.416 [2024-07-14 04:50:35.205406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.205433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.205626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.205655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.205838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.205870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.206024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.206051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.206258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.206285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.206493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.206520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 
00:34:15.417 [2024-07-14 04:50:35.206733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.206759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.206946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.206973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.207156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.207183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.207375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.207402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.207583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.207611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.207788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.207815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.208014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.208043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.208261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.208288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.208493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.208520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.208700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.208727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 
00:34:15.417 [2024-07-14 04:50:35.208931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.208958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.209142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.209168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.209321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.209353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.209557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.209584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.209742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.209769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.209948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.209975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.210198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.210225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.210403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.210430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.210616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.210656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.210884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.210919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 
00:34:15.417 [2024-07-14 04:50:35.211123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.211150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.211300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.211326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.211487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.211513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.211665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.211693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.211900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.211928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.212104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.417 [2024-07-14 04:50:35.212131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.417 qpair failed and we were unable to recover it. 00:34:15.417 [2024-07-14 04:50:35.212295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.212322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.212514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.212542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.212720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.212747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.212958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.212986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 
00:34:15.418 [2024-07-14 04:50:35.213194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.213221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.213409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.213436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.213602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.213629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.213808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.213835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.214001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.214028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.214174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.214201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.214384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.214422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.214598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.214651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.214818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.214848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.215076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.215105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 
00:34:15.418 [2024-07-14 04:50:35.215316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.215343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.215549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.215575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.215731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.215759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.215957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.215985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.216162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.216189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.216391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.216418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.216599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.216626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.216787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.216814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.217018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.217045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.217252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.217279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 
00:34:15.418 [2024-07-14 04:50:35.217517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.217547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.217750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.217779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.217976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.218003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.218170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.218198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.218404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.218431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.218589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.218626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.218834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.418 [2024-07-14 04:50:35.218861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.418 qpair failed and we were unable to recover it. 00:34:15.418 [2024-07-14 04:50:35.219060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.219090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.219290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.219320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.219483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.219512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 
00:34:15.419 [2024-07-14 04:50:35.219727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.219755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.219965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.219993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.220179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.220206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.220361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.220388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.220571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.220598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.220750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.220777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.220948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.220976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.221138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.221165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.221361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.221389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.221568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.221596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 
00:34:15.419 [2024-07-14 04:50:35.221807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.221837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.221994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.222022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.222232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.222258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.222463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.222492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.222685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.222716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.222889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.222921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.223154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.223180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.223342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.223370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.223552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.223579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.223770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.223797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 
00:34:15.419 [2024-07-14 04:50:35.223986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.224013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.224204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.224231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.224423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.224450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.224653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.224682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.224834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.224875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.225086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.225113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.225339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.225373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.225610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.225639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.225873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.225900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.226112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.226139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 
00:34:15.419 [2024-07-14 04:50:35.226322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.226349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.226532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.226559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.419 qpair failed and we were unable to recover it. 00:34:15.419 [2024-07-14 04:50:35.226770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.419 [2024-07-14 04:50:35.226797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.227025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.227055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.227305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.227357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.227566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.227593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.227823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.227850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.228042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.228069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.228281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.228308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.228513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.228540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 
00:34:15.420 [2024-07-14 04:50:35.228754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.228781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.228979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.229009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.229207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.229236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.229436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.229466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.229674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.229701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.229884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.229911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.230089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.230116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.230299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.230325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.230485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.230512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.230705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.230732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 
00:34:15.420 [2024-07-14 04:50:35.230937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.230964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.231178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.231208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.231445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.231472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.231668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.231694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.231925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.231955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.232156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.232186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.232383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.232414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.232620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.232647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.232826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.232864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.233048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.233076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 
00:34:15.420 [2024-07-14 04:50:35.233261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.233290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.233481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.233508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.233696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.233723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.233902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.233929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.234106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.234133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.420 qpair failed and we were unable to recover it. 00:34:15.420 [2024-07-14 04:50:35.234307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.420 [2024-07-14 04:50:35.234334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.234573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.234608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.234818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.234847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.235103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.235130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.235290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.235317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 
00:34:15.421 [2024-07-14 04:50:35.235496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.235523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.235709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.235738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.235995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.236023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.236202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.236229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.236437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.236464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.236639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.236666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.236824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.236851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.237040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.237067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.237251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.237278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.237469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.237496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 
00:34:15.421 [2024-07-14 04:50:35.237647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.237674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.237869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.237898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.238078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.238105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.238254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.238281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.238460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.238487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.238718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.238748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.238979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.239020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.239221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.239257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.239481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.239507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.239671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.239700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 
00:34:15.421 [2024-07-14 04:50:35.239885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.239913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.240073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.240101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.240318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.240345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.240545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.421 [2024-07-14 04:50:35.240571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.421 qpair failed and we were unable to recover it. 00:34:15.421 [2024-07-14 04:50:35.240777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.240821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.241021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.241052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.241251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.241278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.241455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.241482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.241644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.241670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.241853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.241885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 
00:34:15.422 [2024-07-14 04:50:35.242108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.242136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.242314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.242341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.242511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.242539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.242731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.242758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.242932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.242959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.243162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.243202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.243406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.243439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.243645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.243671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.243870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.243904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.244088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.244121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 
00:34:15.422 [2024-07-14 04:50:35.244361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.244391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.244618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.244648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.244851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.244887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.245081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.245108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.245273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.245302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.245553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.245582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.245780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.245808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.245964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.246009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.246236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.246265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.246472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.246499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 
00:34:15.422 [2024-07-14 04:50:35.246719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.246746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.246931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.246958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.247147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.247174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.247364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.247390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.247563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.247589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.422 [2024-07-14 04:50:35.247765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.422 [2024-07-14 04:50:35.247792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.422 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.247947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.247975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.248185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.248212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.248389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.248415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.248597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.248624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 
00:34:15.423 [2024-07-14 04:50:35.248837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.248877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.249085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.249115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.249294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.249321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.249519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.249546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.249750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.249777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.249934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.249961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.250145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.250174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.250343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.250370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.250560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.250595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.250770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.250797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 
00:34:15.423 [2024-07-14 04:50:35.250976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.251003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.251183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.251209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.251386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.251413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.251568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.251595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.251801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.251828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.252012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.252039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.252191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.252222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.252410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.252437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.252614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.252646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.252849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.252888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 
00:34:15.423 [2024-07-14 04:50:35.253057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.253084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.253260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.253287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.253444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.253472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.253658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.253687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.253890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.253920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.254153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.254182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.254357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.254384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.254571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.254597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.254780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.254807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.255012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.255043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 
00:34:15.423 [2024-07-14 04:50:35.255230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.255257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.255441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.255469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.255678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.255716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.255898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.423 [2024-07-14 04:50:35.255926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.423 qpair failed and we were unable to recover it. 00:34:15.423 [2024-07-14 04:50:35.256107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.256134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.256332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.256363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.256572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.256598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.256784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.256811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.257010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.257039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.257196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.257226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 
00:34:15.424 [2024-07-14 04:50:35.257437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.257463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.257640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.257668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.257858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.257890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.258074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.258102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.258275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.258302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.258458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.258486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.258698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.258725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.258904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.258931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.259104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.259131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.259338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.259364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 
00:34:15.424 [2024-07-14 04:50:35.259546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.259572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.259755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.259783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.259974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.260002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.260181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.260211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.260384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.260410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.260588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.260615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.260767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.260804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.260954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.260982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.261156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.261183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 00:34:15.424 [2024-07-14 04:50:35.261382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.424 [2024-07-14 04:50:35.261411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.424 qpair failed and we were unable to recover it. 
00:34:15.424 [2024-07-14 04:50:35.261605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.424 [2024-07-14 04:50:35.261634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420
00:34:15.424 qpair failed and we were unable to recover it.
[... the same three-line failure sequence ("connect() failed, errno = 111"; "sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420"; "qpair failed and we were unable to recover it.") repeats continuously from 04:50:35.261844 through 04:50:35.311592 ...]
00:34:15.430 [2024-07-14 04:50:35.311769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.430 [2024-07-14 04:50:35.311800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420
00:34:15.430 qpair failed and we were unable to recover it.
00:34:15.430 [2024-07-14 04:50:35.311992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.312020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.312187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.312218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.312447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.312476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.312640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.312670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.312873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.312900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.313088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.313115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.313320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.313363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.313526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.313557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.313761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.313789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.313993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.314025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 
00:34:15.430 [2024-07-14 04:50:35.314218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.314247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.314444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.314475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.314683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.314710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.314896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.314924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.315157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.315187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.315420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.315449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.315647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.315675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.315881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.315909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.316112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.316142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.316377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.316406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 
00:34:15.430 [2024-07-14 04:50:35.316592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.316619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.316775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.316802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.317007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.317037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.317245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.317272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.317447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.317474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.317705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.317734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.317969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.318000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.318210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.318252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.318488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.318514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.318744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.318774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 
00:34:15.430 [2024-07-14 04:50:35.319001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.319028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.430 qpair failed and we were unable to recover it. 00:34:15.430 [2024-07-14 04:50:35.319200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.430 [2024-07-14 04:50:35.319244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 00:34:15.431 [2024-07-14 04:50:35.319461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.319488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 00:34:15.431 [2024-07-14 04:50:35.319733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.319763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 00:34:15.431 [2024-07-14 04:50:35.319956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.319987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 00:34:15.431 [2024-07-14 04:50:35.320153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.320183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 00:34:15.431 [2024-07-14 04:50:35.320380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.320417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 00:34:15.431 [2024-07-14 04:50:35.320613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.320640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 00:34:15.431 [2024-07-14 04:50:35.320851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.320885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 00:34:15.431 [2024-07-14 04:50:35.321145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.321175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 
00:34:15.431 [2024-07-14 04:50:35.321381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.321409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 00:34:15.431 [2024-07-14 04:50:35.321612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.321640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 00:34:15.431 [2024-07-14 04:50:35.321827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.321858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 00:34:15.431 [2024-07-14 04:50:35.322062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.322092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 00:34:15.431 [2024-07-14 04:50:35.322305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.322331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 00:34:15.431 [2024-07-14 04:50:35.322518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.322545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 00:34:15.431 [2024-07-14 04:50:35.322817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.322847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 00:34:15.431 [2024-07-14 04:50:35.323090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.323121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 00:34:15.431 [2024-07-14 04:50:35.323320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.323347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 00:34:15.431 [2024-07-14 04:50:35.323551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.323582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 
00:34:15.431 [2024-07-14 04:50:35.323781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.323814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 00:34:15.431 [2024-07-14 04:50:35.324005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.324036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 00:34:15.431 [2024-07-14 04:50:35.324209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.324237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Write completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Write completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Write completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Write completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Write completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Write completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Write completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Write completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 
Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 Read completed with error (sct=0, sc=8) 00:34:15.431 starting I/O failed 00:34:15.431 [2024-07-14 04:50:35.324577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.431 [2024-07-14 04:50:35.324802] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14090f0 is same with the state(5) to be set 00:34:15.431 [2024-07-14 04:50:35.325172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.431 [2024-07-14 04:50:35.325217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.431 qpair failed and we were unable to recover it. 00:34:15.431 [2024-07-14 04:50:35.325471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.325498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.325842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.325922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.326139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.326165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.326413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.326441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.326621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.326650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.326881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.326924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.327079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.327105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 
00:34:15.432 [2024-07-14 04:50:35.327275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.327302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.327472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.327499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.327684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.327711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.327920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.327950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.328175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.328207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.328435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.328462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.328798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.328848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.329066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.329095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.329390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.329416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.329719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.329778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 
00:34:15.432 [2024-07-14 04:50:35.330004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.330033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.330236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.330263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.330570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.330625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.330874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.330902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.331060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.331086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.331294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.331324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.331508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.331535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.331758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.331785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.331984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.332014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.332190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.332232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 
00:34:15.432 [2024-07-14 04:50:35.332458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.332485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.332891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.332951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.333224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.333254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.333541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.333569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.333949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.333979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.334214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.334245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.334495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.334522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.334735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.334765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.334951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.334978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 00:34:15.432 [2024-07-14 04:50:35.335153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.432 [2024-07-14 04:50:35.335187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.432 qpair failed and we were unable to recover it. 
00:34:15.432 [2024-07-14 04:50:35.335387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.335416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.335618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.335648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.335832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.335859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.336078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.336120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.336359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.336387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.336652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.336678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.336933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.336960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.337356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.337402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.337664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.337693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.337980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.338011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 
00:34:15.433 [2024-07-14 04:50:35.338289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.338318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.338523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.338550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.338808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.338859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.339037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.339066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.339268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.339295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.339491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.339521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.339694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.339724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.339930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.339957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.340132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.340161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.340405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.340431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 
00:34:15.433 [2024-07-14 04:50:35.340632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.340659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.340876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.340906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.341098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.341127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.341316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.341343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.341545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.341577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.341780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.341810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.342004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.342031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.342239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.342269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.342468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.342498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.342708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.342749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 
00:34:15.433 [2024-07-14 04:50:35.342956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.342983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.343142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.343185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.343413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.343440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.343646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.343676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.343951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.343982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.344211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.344238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.344462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.344492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.344716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.344745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.344972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.344999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.345208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.345244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 
00:34:15.433 [2024-07-14 04:50:35.345445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.345475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.433 qpair failed and we were unable to recover it. 00:34:15.433 [2024-07-14 04:50:35.345678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.433 [2024-07-14 04:50:35.345705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.345854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.345908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.346135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.346165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.346366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.346393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.346585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.346614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.346831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.346861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.347070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.347097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.347281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.347309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.347512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.347542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 
00:34:15.434 [2024-07-14 04:50:35.347813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.347840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.348056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.348086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.348278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.348308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.348570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.348597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.348808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.348838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.349043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.349072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.349259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.349287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.349496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.349526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.349800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.349830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.350066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.350093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 
00:34:15.434 [2024-07-14 04:50:35.350275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.350305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.350497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.350527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.350762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.350789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.351064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.351099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.351372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.351399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.351584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.351615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.351824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.351851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.352103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.352134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.352359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.352387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.352662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.352692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 
00:34:15.434 [2024-07-14 04:50:35.352890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.352933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.353115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.353142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.353347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.353377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.353603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.353633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.353805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.353833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.354002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.354030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.354202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.354230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.354488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.354515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.354918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.354948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.355157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.355186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 
00:34:15.434 [2024-07-14 04:50:35.355393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.355420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.355651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.434 [2024-07-14 04:50:35.355681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.434 qpair failed and we were unable to recover it. 00:34:15.434 [2024-07-14 04:50:35.355894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.355925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.356107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.356134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.356334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.356364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.356638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.356668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.356897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.356924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.357166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.357196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.357465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.357492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.357702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.357730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 
00:34:15.435 [2024-07-14 04:50:35.357970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.358004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.358178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.358207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.358435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.358462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.358623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.358650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.358857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.358894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.359108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.359135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.359376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.359403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.359582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.359609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.359838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.359885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.360062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.360090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 
00:34:15.435 [2024-07-14 04:50:35.360290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.360317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.360580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.360607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.360841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.360878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.361114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.361144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.361350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.361377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.361587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.361617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.361829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.361858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.362105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.362133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.362309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.362338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.362539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.362568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 
00:34:15.435 [2024-07-14 04:50:35.362752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.362779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.362987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.363014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.363232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.363262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.363474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.363501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.363707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.363734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.363886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.435 [2024-07-14 04:50:35.363914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.435 qpair failed and we were unable to recover it. 00:34:15.435 [2024-07-14 04:50:35.364094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.364121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.364327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.364361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.364586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.364616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.364822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.364849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 
00:34:15.436 [2024-07-14 04:50:35.365061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.365091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.365295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.365325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.365531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.365558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.365790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.365819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.366026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.366057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.366264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.366291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.366560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.366589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.366820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.366850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.367060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.367087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.367274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.367302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 
00:34:15.436 [2024-07-14 04:50:35.367514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.367544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.367727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.367757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.367944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.367972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.368149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.368184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.368372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.368402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.368569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.368597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.368824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.368853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.369040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.369067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.369256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.369285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.369485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.369515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 
00:34:15.436 [2024-07-14 04:50:35.369781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.369808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.370047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.370077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.370313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.370343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.370574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.370602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.370810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.370840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.371064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.371094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.371277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.371304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.371461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.371488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.371694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.371724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.371912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.371949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 
00:34:15.436 [2024-07-14 04:50:35.372139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.372167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.372375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.372405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.372636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.372663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.372909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.372940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.373134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.373164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.373362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.373390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.373611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.436 [2024-07-14 04:50:35.373641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.436 qpair failed and we were unable to recover it. 00:34:15.436 [2024-07-14 04:50:35.373851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.373887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.374103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.374131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.374318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.374345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 
00:34:15.437 [2024-07-14 04:50:35.374549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.374579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.374780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.374807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.375016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.375046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.375240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.375270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.375481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.375508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.375735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.375765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.375970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.376001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.376209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.376236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.376404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.376434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.376631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.376660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 
00:34:15.437 [2024-07-14 04:50:35.376858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.376892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.377074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.377101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.377307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.377337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.377539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.377566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.377747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.377778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.377991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.378019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.378172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.378199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.378373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.378404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.378580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.378610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.378811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.378838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 
00:34:15.437 [2024-07-14 04:50:35.379062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.379092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.379302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.379329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.379479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.379505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.379737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.379767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.379997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.380027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.380228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.380263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.380429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.380456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.380612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.380650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.380840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.380873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.381084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.381114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 
00:34:15.437 [2024-07-14 04:50:35.381301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.381329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.381537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.381564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.381739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.381770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.381938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.381970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.382171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.382199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.382383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.382411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.382591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.382621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.382834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.382861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.437 qpair failed and we were unable to recover it. 00:34:15.437 [2024-07-14 04:50:35.383045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.437 [2024-07-14 04:50:35.383072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.438 qpair failed and we were unable to recover it. 00:34:15.438 [2024-07-14 04:50:35.383289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.438 [2024-07-14 04:50:35.383320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.438 qpair failed and we were unable to recover it. 
00:34:15.438 [2024-07-14 04:50:35.383550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.438 [2024-07-14 04:50:35.383578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420
00:34:15.438 qpair failed and we were unable to recover it.
[ ... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 04:50:35.383817 through 04:50:35.432093 ... ]
00:34:15.443 [2024-07-14 04:50:35.432326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.443 [2024-07-14 04:50:35.432356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420
00:34:15.443 qpair failed and we were unable to recover it.
00:34:15.443 [2024-07-14 04:50:35.432589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.432615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.432823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.432853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.433079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.433110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.433320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.433347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.433522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.433551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.433729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.433758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.433968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.433995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.434227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.434258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.434457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.434486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.434667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.434694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 
00:34:15.443 [2024-07-14 04:50:35.434851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.434887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.435118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.435148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.435345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.435371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.435574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.435604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.435807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.435836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.436040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.436067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.436222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.436249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.436428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.436455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.436607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.436634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.436842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.436879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 
00:34:15.443 [2024-07-14 04:50:35.437051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.437077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.437277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.437304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.437462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.437488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.437662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.437689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.437888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.437928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.438089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.438117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.438295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.438322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.438528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.438555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.438725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.438751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 00:34:15.443 [2024-07-14 04:50:35.438907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.443 [2024-07-14 04:50:35.438934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.443 qpair failed and we were unable to recover it. 
00:34:15.444 [2024-07-14 04:50:35.439084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.439125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.439325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.439353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.439564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.439591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.439780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.439807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.439984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.440011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.440173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.440199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.440416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.440443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.440641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.440667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.440864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.440896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.441083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.441109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 
00:34:15.444 [2024-07-14 04:50:35.441318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.441344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.441493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.441520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.441733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.441760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.441915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.441942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.442150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.442176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.442354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.442380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.442591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.442624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.442790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.442818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.442981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.443008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.443212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.443239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 
00:34:15.444 [2024-07-14 04:50:35.443386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.443412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.443616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.443642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.443824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.443850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.444064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.444093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.444286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.444316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.444585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.444614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.444812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.444855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.445072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.445099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.445281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.445307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.445507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.445536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 
00:34:15.444 [2024-07-14 04:50:35.445749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.445778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.445975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.446002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.446161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.446188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.446388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.446419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.446619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.444 [2024-07-14 04:50:35.446646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.444 qpair failed and we were unable to recover it. 00:34:15.444 [2024-07-14 04:50:35.446831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.446858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.447056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.447083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.447248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.447276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.447423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.447466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.447670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.447699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 
00:34:15.445 [2024-07-14 04:50:35.447902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.447931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.448111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.448142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.448298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.448324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.448483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.448511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.448687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.448715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.448909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.448938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.449136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.449174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.449355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.449383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.449600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.449628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.449837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.449864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 
00:34:15.445 [2024-07-14 04:50:35.450109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.450138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.450366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.450395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.450608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.450635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.450801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.450830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.451033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.451060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.451251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.451278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.451462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.451489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.451690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.451721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.451910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.451937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.452093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.452119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 
00:34:15.445 [2024-07-14 04:50:35.452282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.452310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.452520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.452548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.452693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.452720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.452923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.452950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.453111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.453138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.453298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.453328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.453536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.453566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.453745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.453781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.453947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.453974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.454125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.454156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 
00:34:15.445 [2024-07-14 04:50:35.454329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.454356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.454566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.454596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.454807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.454835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.455010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.455037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.455219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.455247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.455396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.445 [2024-07-14 04:50:35.455439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.445 qpair failed and we were unable to recover it. 00:34:15.445 [2024-07-14 04:50:35.455616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.455646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.455862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.455926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.456089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.456115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.456294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.456321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 
00:34:15.446 [2024-07-14 04:50:35.456505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.456532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.456706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.456734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.456945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.456972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.457151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.457182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.457387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.457422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.457624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.457651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.457882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.457923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.458100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.458138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.458321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.458348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.458529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.458557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 
00:34:15.446 [2024-07-14 04:50:35.458764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.458791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.458959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.458986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.459147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.459173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.459404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.459434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.459665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.459692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.459849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.459883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.460048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.460075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.460232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.460260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.460499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.460529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.460725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.460755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 
00:34:15.446 [2024-07-14 04:50:35.460954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.460982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.461134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.461161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.461377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.461405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.461563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.461590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.461785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.461815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.462016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.462043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.462197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.462224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.462408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.462435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.462627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.462654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.462810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.462837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 
00:34:15.446 [2024-07-14 04:50:35.463016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.463043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.463193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.463225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.463386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.463415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.463623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.463650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.463829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.463856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.464023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.464050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.464244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.464274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.446 [2024-07-14 04:50:35.464437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.446 [2024-07-14 04:50:35.464466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.446 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.464641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.464668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.464829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.464856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 
00:34:15.447 [2024-07-14 04:50:35.465028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.465055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.465218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.465245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.465390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.465417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.465574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.465601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.465757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.465784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.465964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.465992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.466150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.466179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.466384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.466411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.466562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.466589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.466777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.466804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 
00:34:15.447 [2024-07-14 04:50:35.467025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.467052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.467195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.467222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.467395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.467425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.467623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.467650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.467833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.467860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.468023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.468050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.468224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.468251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.468411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.468438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.468623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.468650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.468837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.468946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 
00:34:15.447 [2024-07-14 04:50:35.469108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.469144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.469330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.469357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.469532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.469559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.469773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.469803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.470046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.470076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.470307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.470335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.470487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.470514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.470664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.470718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.470909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.470937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.471130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.471160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 
00:34:15.447 [2024-07-14 04:50:35.471326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.471368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.471607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.471634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.471846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.471907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.472101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.472133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.472333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.472359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.472515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.472543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.472726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.472753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.472920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.472948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.447 [2024-07-14 04:50:35.473107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.447 [2024-07-14 04:50:35.473145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.447 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.473326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.473353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 
00:34:15.448 [2024-07-14 04:50:35.473533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.473560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.473770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.473797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.474008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.474036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.474194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.474222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.474381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.474409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.474590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.474617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.474801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.474835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.475029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.475089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.475323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.475369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.475609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.475638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 
00:34:15.448 [2024-07-14 04:50:35.475800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.475827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.476023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.476050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.476210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.476237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.476413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.476460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.476658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.476706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.476886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.476918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.477088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.477114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.477287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.477313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.477499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.477527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.477690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.477718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 
00:34:15.448 [2024-07-14 04:50:35.477880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.477909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.478069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.478097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.478257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.478284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.478470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.478500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.478686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.478713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.478912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.478940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.479102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.479129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.479311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.479338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.479494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.479521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.479724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.479754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 
00:34:15.448 [2024-07-14 04:50:35.479971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.479999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.480160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.480187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.480343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.480375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.480550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.480577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.448 qpair failed and we were unable to recover it. 00:34:15.448 [2024-07-14 04:50:35.480766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.448 [2024-07-14 04:50:35.480809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.480997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.481025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.481187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.481215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.481398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.481425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.481587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.481614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.481765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.481793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 
00:34:15.449 [2024-07-14 04:50:35.481957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.481985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.482139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.482182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.482389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.482416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.482596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.482623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.482781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.482808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.482976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.483004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.483159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.483186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.483387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.483417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.483619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.483646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.483805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.483833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 
00:34:15.449 [2024-07-14 04:50:35.484000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.484027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.484188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.484215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.484369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.484396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.484606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.484653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.484893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.484920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.485074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.485101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.485255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.485282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.485442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.485469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.485629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.485658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.485849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.485882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 
00:34:15.449 [2024-07-14 04:50:35.486034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.486061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.486214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.486240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.486424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.486451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.486604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.486631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.486834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.486861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.487015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.487042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.487204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.487231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.487379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.487406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.487551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.487579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.487782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.487809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 
00:34:15.449 [2024-07-14 04:50:35.487973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.488001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.488165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.488192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.488347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.488378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.488565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.488616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.488844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.449 [2024-07-14 04:50:35.488881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.449 qpair failed and we were unable to recover it. 00:34:15.449 [2024-07-14 04:50:35.489092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.489120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.489299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.489326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.489530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.489577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.489783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.489810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.489976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.490007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 
00:34:15.450 [2024-07-14 04:50:35.490177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.490204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.490387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.490415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.490599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.490625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.490818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.490845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.491046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.491073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.491236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.491263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.491449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.491476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.491690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.491716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.491898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.491926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.492087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.492114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 
00:34:15.450 [2024-07-14 04:50:35.492297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.492324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.492481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.492507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.492664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.492691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.492877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.492905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.493051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.493077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.493247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.493276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.493476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.493503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.493708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.493735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.493940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.493967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.494121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.494148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 
00:34:15.450 [2024-07-14 04:50:35.494351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.494381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.494582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.494611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.494787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.494813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.495730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.495765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.495972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.496003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.496185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.496213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.496393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.496421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.496600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.496627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.496840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.496876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.497072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.497099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 
00:34:15.450 [2024-07-14 04:50:35.497288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.497316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.497517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.497558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.497769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.497803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.497989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.498017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.498201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.498227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.498452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.450 [2024-07-14 04:50:35.498498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.450 qpair failed and we were unable to recover it. 00:34:15.450 [2024-07-14 04:50:35.498685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.498720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.498890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.498921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.499078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.499105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.499295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.499322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 
00:34:15.451 [2024-07-14 04:50:35.499503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.499529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.499769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.499798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.499996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.500024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.500187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.500213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.500396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.500424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.500637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.500664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.500857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.500892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.501056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.501083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.501296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.501325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.501561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.501607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 
00:34:15.451 [2024-07-14 04:50:35.501807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.501837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.502045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.502072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.502261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.502288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.502448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.502476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.502659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.502689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.502887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.502921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.503078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.503106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.503345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.503374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.503589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.503620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.503821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.503851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 
00:34:15.451 [2024-07-14 04:50:35.504068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.504095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.504301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.504331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.504561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.504591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.504771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.504800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.505010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.505038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.505208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.505235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.505439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.505468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.505695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.505743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.505960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.505987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.506148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.506175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 
00:34:15.451 [2024-07-14 04:50:35.506396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.506443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.506667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.506697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.506900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.506960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.507117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.507144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.507331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.507359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.507520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.507547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.507741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.451 [2024-07-14 04:50:35.507771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.451 qpair failed and we were unable to recover it. 00:34:15.451 [2024-07-14 04:50:35.507981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.508008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.508160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.508186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.508331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.508357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 
00:34:15.452 [2024-07-14 04:50:35.508566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.508593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.508748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.508775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.508959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.508986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.509140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.509166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.509365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.509394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.509633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.509663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.509898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.509941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.510100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.510126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.510332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.510359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.511237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.511271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 
00:34:15.452 [2024-07-14 04:50:35.511511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.511543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.511793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.511824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.512029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.512057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.512223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.512250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.512468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.512496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.512679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.512707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.512888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.512927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.513092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.513119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.513296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.513325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.513641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.513672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 
00:34:15.452 [2024-07-14 04:50:35.513837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.513874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.514090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.514118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.514298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.514326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.514595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.514642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.514844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.514882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.515056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.515083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.515264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.515290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.515493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.515520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.515694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.515721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.515949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.515976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 
00:34:15.452 [2024-07-14 04:50:35.516131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.516169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.516382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.452 [2024-07-14 04:50:35.516410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.452 qpair failed and we were unable to recover it. 00:34:15.452 [2024-07-14 04:50:35.516607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.516643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.516838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.516875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.517064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.517090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.517282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.517312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.517500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.517532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.517731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.517762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.517948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.517976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.518134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.518178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 
00:34:15.453 [2024-07-14 04:50:35.518383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.518424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.518657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.518719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.518944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.518974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.519162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.519212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.519428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.519483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.519723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.519771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.520004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.520051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.520247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.520293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.520473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.520529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.520716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.520745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 
00:34:15.453 [2024-07-14 04:50:35.520977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.521023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.521221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.521266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.521501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.521546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.521706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.521734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.521947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.521999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.522182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.522227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.522478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.522532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.522749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.522778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.523012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.523067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.523259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.523304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 
00:34:15.453 [2024-07-14 04:50:35.523465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.523505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.523714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.523743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.523947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.523993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.524176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.524220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.524471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.524516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.524691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.524717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.524950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.524978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.525154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.525184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.525387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.525426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.525587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.525616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 
00:34:15.453 [2024-07-14 04:50:35.525839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.525876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.526078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.453 [2024-07-14 04:50:35.526123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.453 qpair failed and we were unable to recover it. 00:34:15.453 [2024-07-14 04:50:35.526306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.454 [2024-07-14 04:50:35.526361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.454 qpair failed and we were unable to recover it. 00:34:15.454 [2024-07-14 04:50:35.526610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.454 [2024-07-14 04:50:35.526654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.454 qpair failed and we were unable to recover it. 00:34:15.454 [2024-07-14 04:50:35.526815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.454 [2024-07-14 04:50:35.526843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.454 qpair failed and we were unable to recover it. 00:34:15.454 [2024-07-14 04:50:35.527046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.454 [2024-07-14 04:50:35.527092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.454 qpair failed and we were unable to recover it. 00:34:15.454 [2024-07-14 04:50:35.527306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.454 [2024-07-14 04:50:35.527350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.454 qpair failed and we were unable to recover it. 00:34:15.454 [2024-07-14 04:50:35.527529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.454 [2024-07-14 04:50:35.527574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.454 qpair failed and we were unable to recover it. 00:34:15.454 [2024-07-14 04:50:35.527791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.454 [2024-07-14 04:50:35.527820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.454 qpair failed and we were unable to recover it. 00:34:15.454 [2024-07-14 04:50:35.528020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.454 [2024-07-14 04:50:35.528075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.454 qpair failed and we were unable to recover it. 
00:34:15.454 [2024-07-14 04:50:35.528294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.454 [2024-07-14 04:50:35.528342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.454 qpair failed and we were unable to recover it. 00:34:15.454 [2024-07-14 04:50:35.528556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.454 [2024-07-14 04:50:35.528611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.454 qpair failed and we were unable to recover it. 00:34:15.454 [2024-07-14 04:50:35.528812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.454 [2024-07-14 04:50:35.528841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.454 qpair failed and we were unable to recover it. 00:34:15.454 [2024-07-14 04:50:35.529053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.735 [2024-07-14 04:50:35.529102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.529315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.529360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.529594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.529643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.529833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.529861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.530077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.530123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.530361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.530406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.530613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.530673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 
00:34:15.736 [2024-07-14 04:50:35.530878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.530907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.531065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.531093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.531288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.531316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.531497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.531554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.531737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.531785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.531958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.531986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.532181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.532225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.532467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.532511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.532699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.532726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.532939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.532987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 
00:34:15.736 [2024-07-14 04:50:35.533176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.533220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.533434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.533480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.533669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.533701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.533923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.533970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.534172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.534226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.534447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.534493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.534679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.534706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.534898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.534945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.535126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.535171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.535383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.535427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 
00:34:15.736 [2024-07-14 04:50:35.535590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.535618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.535806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.535833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.536040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.536091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.536298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.536352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.536565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.536610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.536795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.536822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.537027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.537074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.537261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.537307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.537544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.537601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.537761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.537789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 
00:34:15.736 [2024-07-14 04:50:35.537995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.538044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.538250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.538295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.538500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.736 [2024-07-14 04:50:35.538548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.736 qpair failed and we were unable to recover it. 00:34:15.736 [2024-07-14 04:50:35.538788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.538817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.539022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.539068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.539273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.539303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.539513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.539559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.539711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.539750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.539981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.540027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.540283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.540328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 
00:34:15.737 [2024-07-14 04:50:35.540548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.540593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.540765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.540792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.540990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.541037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.541223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.541275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.541526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.541570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.541746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.541775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.541985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.542032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.542194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.542223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.542412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.542458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.542688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.542740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 
00:34:15.737 [2024-07-14 04:50:35.542959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.542989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.543152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.543206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.543426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.543474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.543645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.543675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.543843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.543879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.544134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.544163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.544337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.544373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.544574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.544613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.544815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.544844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.545038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.545065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 
00:34:15.737 [2024-07-14 04:50:35.545235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.545262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.545497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.545545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.545749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.545778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.546001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.546029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.546187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.546214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.546390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.546416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.546622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.546648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.546857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.546889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.547053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.547080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.547266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.547293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 
00:34:15.737 [2024-07-14 04:50:35.547512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.547559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.547760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.547789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.547981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.548009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.548166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-14 04:50:35.548193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-14 04:50:35.548379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.548422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.548610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.548639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.548889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.548951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.549098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.549126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.549333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.549359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.549600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.549648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 
00:34:15.738 [2024-07-14 04:50:35.549848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.549883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.550060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.550089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.550351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.550398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.550634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.550663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.550928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.550969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.551141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.551170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.551383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.551411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.551572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.551601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.551833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.551878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.552058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.552086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 
00:34:15.738 [2024-07-14 04:50:35.552300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.552332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.552632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.552663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.552864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.552917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.553077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.553104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.553283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.553311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.553498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.553525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.553739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.553767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.553961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.553989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.554151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.554190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.554349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.554376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 
00:34:15.738 [2024-07-14 04:50:35.554574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.554604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.554815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.554846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.555035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.555063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.555281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.555309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.555489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.555516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.555723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.555750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.555947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.555975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.556128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.556156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.556373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.556401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.556610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.556660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 
00:34:15.738 [2024-07-14 04:50:35.556859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.556895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.557064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.557092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.557357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.557385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.557564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.557591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-14 04:50:35.557798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-14 04:50:35.557825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.557982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.558011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.558201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.558250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.558449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.558480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.558767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.558815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.559003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.559030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 
00:34:15.739 [2024-07-14 04:50:35.559185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.559215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.559410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.559458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.559726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.559775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.559972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.560000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.560157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.560187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.560394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.560420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.560616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.560665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.560907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.560935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.561086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.561113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.561291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.561318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 
00:34:15.739 [2024-07-14 04:50:35.561596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.561644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.561854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.561891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.562056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.562083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.562304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.562346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.562572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.562599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.562807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.562834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.563001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.563029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.563197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.563224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.563389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.563415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.563687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.563713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 
00:34:15.739 [2024-07-14 04:50:35.563880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.563908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.564091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.564117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.564296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.564323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.564508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.564535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.564746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.564782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.564939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.564966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.565127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.565154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.565387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.565417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.565637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.565663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.565885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.565941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 
00:34:15.739 [2024-07-14 04:50:35.566131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.566158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.566380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.566407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.566611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.566660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.566859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.566894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.567063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-14 04:50:35.567091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-14 04:50:35.567350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.567395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.567667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.567722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.567937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.567964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.568168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.568198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.568420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.568450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 
00:34:15.740 [2024-07-14 04:50:35.568630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.568657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.568858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.568901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.569108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.569146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.569366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.569393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.569624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.569671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.569901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.569933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.570115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.570141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.570341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.570371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.570570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.570597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.570808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.570835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 
00:34:15.740 [2024-07-14 04:50:35.571081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.571111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.571322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.571349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.571535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.571563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.571772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.571803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.572012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.572042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.572229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.572255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.572513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.572560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.572740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.572766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.572953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.572981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.573159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.573190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 
00:34:15.740 [2024-07-14 04:50:35.573364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.573395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.573575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.573602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.573785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.573812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.574025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.574053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.574220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.574248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.574477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.574507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.574740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.574769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.740 [2024-07-14 04:50:35.574966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.740 [2024-07-14 04:50:35.574994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.740 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.575169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.575198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.575421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.575448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 
00:34:15.741 [2024-07-14 04:50:35.575631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.575658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.575860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.575899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.576102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.576132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.576319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.576346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.576519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.576546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.576729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.576758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.576938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.576976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.577210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.577241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.577465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.577495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.577680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.577708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 
00:34:15.741 [2024-07-14 04:50:35.577943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.577974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.578172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.578202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.578403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.578431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.578673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.578720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.578951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.578981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.579161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.579189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.579424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.579454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.579627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.579657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.579886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.579927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.580139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.580168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 
00:34:15.741 [2024-07-14 04:50:35.580369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.580399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.580599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.580626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.580862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.580899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.581126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.581154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.581364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.581391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.581583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.581630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.581836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.581872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.582098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.582130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.582313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.582343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.582538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.582569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 
00:34:15.741 [2024-07-14 04:50:35.582749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.582778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.582966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.582994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.583146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.583173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.583368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.583395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.583607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.583655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.583860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.583895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.584073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.584100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.584326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.584371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.741 qpair failed and we were unable to recover it. 00:34:15.741 [2024-07-14 04:50:35.584539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.741 [2024-07-14 04:50:35.584568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.584773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.584800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 
00:34:15.742 [2024-07-14 04:50:35.585017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.585049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.585224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.585255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.585465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.585493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.585669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.585696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.585898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.585930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.586158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.586185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.586414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.586449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.586662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.586692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.586920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.586948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.587152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.587182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 
00:34:15.742 [2024-07-14 04:50:35.587377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.587408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.587640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.587667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.587878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.587920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.588119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.588154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.588333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.588360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.588523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.588551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.588756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.588786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.589019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.589046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.589297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.589326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.589537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.589567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 
00:34:15.742 [2024-07-14 04:50:35.589775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.589803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.590017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.590047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.590277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.590307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.590489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.590517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.590749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.590779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.591005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.591035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.591264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.591291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.591500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.591530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.591729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.591759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.591968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.591996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 
00:34:15.742 [2024-07-14 04:50:35.592173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.592205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.592433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.592463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.592675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.592707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.592941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.592986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.593161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.593193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.593405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.593431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.593616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.593646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.593849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.593888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.594127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.742 [2024-07-14 04:50:35.594154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.742 qpair failed and we were unable to recover it. 00:34:15.742 [2024-07-14 04:50:35.594334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.594364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 
00:34:15.743 [2024-07-14 04:50:35.594560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.594589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.594790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.594818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.595009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.595036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.595193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.595219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.595369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.595397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.595551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.595578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.595762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.595794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.595973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.596001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.596159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.596186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.596365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.596391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 
00:34:15.743 [2024-07-14 04:50:35.596563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.596590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.596793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.596823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.597055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.597082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.597274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.597301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.597541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.597588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.597815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.597842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.598007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.598035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.598214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.598241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.598446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.598488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.598723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.598750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 
00:34:15.743 [2024-07-14 04:50:35.598960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.598990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.599187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.599214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.599390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.599417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.599575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.599603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.599766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.599796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.599994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.600021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.600250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.600297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.600532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.600562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.600765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.600791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.600966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.600997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 
00:34:15.743 [2024-07-14 04:50:35.601193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.601223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.601418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.601445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.601625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.601654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.601860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.601899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.602128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.602155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.602313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.602339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.602487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.602513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.602710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.602736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.602941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.602971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 00:34:15.743 [2024-07-14 04:50:35.603177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.743 [2024-07-14 04:50:35.603203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.743 qpair failed and we were unable to recover it. 
00:34:15.743 [2024-07-14 04:50:35.603382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.603408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.603669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.603719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.603943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.603971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.604182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.604209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.604457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.604484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.604694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.604724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.604955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.604985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.605192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.605221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.605442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.605471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.605694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.605722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 
00:34:15.744 [2024-07-14 04:50:35.605961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.605991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.606196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.606224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.606408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.606435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.606651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.606699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.606928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.606958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.607161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.607189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.607421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.607487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.607713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.607742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.607930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.607958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.608188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.608218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 
00:34:15.744 [2024-07-14 04:50:35.608461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.608491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.608716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.608743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.608952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.608983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.609158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.609188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.609387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.609414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.609568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.609611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.609815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.609845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.610059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.610086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.610293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.610322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.610517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.610546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 
00:34:15.744 [2024-07-14 04:50:35.610744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.610770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.610965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.610996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.611188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.611218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.611429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.611456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.744 [2024-07-14 04:50:35.611621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.744 [2024-07-14 04:50:35.611650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.744 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.611879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.611918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.612102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.612130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.612301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.612331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.612556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.612585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.612793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.612819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 
00:34:15.745 [2024-07-14 04:50:35.613025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.613055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.613277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.613304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.613493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.613520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.613677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.613705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.613934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.613964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.614163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.614191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.614416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.614450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.614670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.614699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.614924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.614951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.615148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.615178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 
00:34:15.745 [2024-07-14 04:50:35.615380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.615410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.615612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.615638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.615816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.615845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.616054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.616084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.616282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.616309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.616506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.616535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.616710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.616737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.616944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.616972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.617211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.617241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.617462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.617492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 
00:34:15.745 [2024-07-14 04:50:35.617737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.617764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.617947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.617974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.618179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.618209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.618411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.618438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.618647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.618677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.618878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.618908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.619129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.619156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.619357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.619386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.619586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.619613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.619818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.619845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 
00:34:15.745 [2024-07-14 04:50:35.620040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.620071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.620266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.620296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.620526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.620553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.620761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.620795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.621019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.621050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.745 qpair failed and we were unable to recover it. 00:34:15.745 [2024-07-14 04:50:35.621288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.745 [2024-07-14 04:50:35.621315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.621487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.621517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.621738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.621770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.621940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.621967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.622157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.622185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 
00:34:15.746 [2024-07-14 04:50:35.622391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.622418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.622601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.622628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.622832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.622858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.623109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.623139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.623340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.623366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.623570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.623601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.623800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.623830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.624048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.624075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.624278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.624307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.624528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.624558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 
00:34:15.746 [2024-07-14 04:50:35.624760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.624787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.624992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.625023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.625213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.625243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.625425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.625452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.625656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.625686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.625858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.625894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.626076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.626103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.626253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.626280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.626509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.626538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.626751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.626778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 
00:34:15.746 [2024-07-14 04:50:35.626987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.627018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.627214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.627243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.627443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.627469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.627650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.627678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.627910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.627940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.628121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.628148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.628382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.628411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.628601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.628631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.628807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.628834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.629049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.629079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 
00:34:15.746 [2024-07-14 04:50:35.629312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.629342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.629544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.629571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.629767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.629797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.630023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.630058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.630265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.630293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.630500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.746 [2024-07-14 04:50:35.630529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.746 qpair failed and we were unable to recover it. 00:34:15.746 [2024-07-14 04:50:35.630732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.630760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.630968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.630996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.631204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.631234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.631401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.631432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 
00:34:15.747 [2024-07-14 04:50:35.631633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.631660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.631862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.631899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.632131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.632161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.632333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.632360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.632558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.632588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.632811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.632840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.633052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.633080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.633246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.633274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.633507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.633537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.633761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.633788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 
00:34:15.747 [2024-07-14 04:50:35.634020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.634051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.634258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.634284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.634443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.634471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.634648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.634679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.634879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.634909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.635098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.635125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.635279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.635305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.635534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.635563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.635766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.635793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.636028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.636058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 
00:34:15.747 [2024-07-14 04:50:35.636263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.636294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.636517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.636543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.636746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.636776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.636977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.637009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.637194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.637221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.637445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.637475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.637649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.637680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.637857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.637892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.638102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.638132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.638335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.638365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 
00:34:15.747 [2024-07-14 04:50:35.638549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.638576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.638759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.638786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.638963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.638992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.639200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.639231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.639383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.639411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.639648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.639677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.639881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.747 [2024-07-14 04:50:35.639908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.747 qpair failed and we were unable to recover it. 00:34:15.747 [2024-07-14 04:50:35.640114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.640144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.640369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.640398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.640594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.640621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 
00:34:15.748 [2024-07-14 04:50:35.640830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.640859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.641072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.641102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.641307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.641334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.641515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.641541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.641751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.641781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.642010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.642038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.642216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.642246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.642474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.642503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.642703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.642729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.642899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.642930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 
00:34:15.748 [2024-07-14 04:50:35.643133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.643162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.643330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.643359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.643553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.643584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.643827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.643853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.644067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.644094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.644328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.644358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.644555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.644586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.644787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.644814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.644975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.645003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.645233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.645262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 
00:34:15.748 [2024-07-14 04:50:35.645467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.645494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.645697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.645731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.645926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.645957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.646152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.646179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.646360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.646398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.646608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.646637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.646836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.646863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.647073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.647104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.647331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.647361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.647532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.647558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 
00:34:15.748 [2024-07-14 04:50:35.647752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.647782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.748 [2024-07-14 04:50:35.647990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.748 [2024-07-14 04:50:35.648017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.748 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.648222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.648249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.648451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.648484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.648681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.648710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.648886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.648914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.649074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.649101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.649330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.649360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.649540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.649567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.649769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.649798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 
00:34:15.749 [2024-07-14 04:50:35.649995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.650026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.650229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.650256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.650462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.650493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.650671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.650701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.650879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.650906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.651070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.651096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.651300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.651327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.651548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.651575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.651799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.651828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.652055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.652082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 
00:34:15.749 [2024-07-14 04:50:35.655087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.655133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.655332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.655364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.655570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.655597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.655799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.655826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.656038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.656068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.656265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.656295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.656529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.656555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.656732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.656762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.656990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.657018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.657222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.657249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 
00:34:15.749 [2024-07-14 04:50:35.657419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.657448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.657655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.657685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.657894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.657921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.658107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.658137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.658341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.658368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.658572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.658599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.658806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.658837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.659056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.659083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.659268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.659295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.659505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.659534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 
00:34:15.749 [2024-07-14 04:50:35.659736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.659765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.749 qpair failed and we were unable to recover it. 00:34:15.749 [2024-07-14 04:50:35.659997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.749 [2024-07-14 04:50:35.660024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.660254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.660284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.660498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.660544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.660798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.660825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.661028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.661081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.661304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.661334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.661557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.661584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.661762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.661793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.662019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.662050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 
00:34:15.750 [2024-07-14 04:50:35.662267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.662293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.662508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.662538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.662715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.662746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.662913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.662941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.663114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.663144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.663339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.663369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.663596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.663623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.663826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.663856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.664064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.664094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.664296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.664343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 
00:34:15.750 [2024-07-14 04:50:35.664555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.664585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.664808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.664838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.665071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.665098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.665299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.665328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.665533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.665564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.665796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.665823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.666041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.666073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.666290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.666316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.666504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.666531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.666698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.666727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 
00:34:15.750 [2024-07-14 04:50:35.666929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.666961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.667184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.667225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.667525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.667552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.667847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.667883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.668094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.668121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.668359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.668386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.668531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.668558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.668769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.668796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.668991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.669022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.669249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.669279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 
00:34:15.750 [2024-07-14 04:50:35.669464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.669491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.669691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.750 [2024-07-14 04:50:35.669721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.750 qpair failed and we were unable to recover it. 00:34:15.750 [2024-07-14 04:50:35.669991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.670021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.670228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.670260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.670443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.670470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.670625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.670652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.670858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.670893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.671097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.671128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.671298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.671328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.671520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.671547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 
00:34:15.751 [2024-07-14 04:50:35.671746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.671776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.671942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.671973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.672149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.672175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.672405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.672435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.672666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.672696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.672898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.672925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.673127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.673157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.673370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.673400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.673631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.673658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.673876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.673905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 
00:34:15.751 [2024-07-14 04:50:35.674121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.674151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.674379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.674406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.674639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.674669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.674877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.674907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.675111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.675138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.675322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.675349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.675550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.675580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.675809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.675836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.676053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.676084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.676259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.676289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 
00:34:15.751 [2024-07-14 04:50:35.676497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.676524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.676706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.676733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.676934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.676964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.677196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.677223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.677437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.677467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.677668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.677697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.677924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.677951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.678209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.678235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.678478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.678508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.678744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.678771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 
00:34:15.751 [2024-07-14 04:50:35.678951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.678982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.679183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.679213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.751 qpair failed and we were unable to recover it. 00:34:15.751 [2024-07-14 04:50:35.679411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.751 [2024-07-14 04:50:35.679438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.679664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.679698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.679937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.679964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.680152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.680179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.680382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.680412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.680611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.680641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.680839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.680873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.681047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.681077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 
00:34:15.752 [2024-07-14 04:50:35.681302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.681332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.681559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.681587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.681767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.681796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.681994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.682025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.682231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.682258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.682463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.682493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.682698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.682725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.682936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.682964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.683163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.683193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.683369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.683399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 
00:34:15.752 [2024-07-14 04:50:35.683597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.683625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.683821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.683851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.684054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.684084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.684279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.684306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.684533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.684563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.684790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.684820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.685070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.685098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.685340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.685367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.685595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.685625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.685829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.685856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 
00:34:15.752 [2024-07-14 04:50:35.686082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.686110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.686337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.686367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.686606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.686634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.686808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.686838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.687057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.687085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.687305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.687332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.687499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.687526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.687711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.687750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.687938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.687966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.688205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.688239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 
00:34:15.752 [2024-07-14 04:50:35.688440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.688472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.688684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.688711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.688872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.752 [2024-07-14 04:50:35.688900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.752 qpair failed and we were unable to recover it. 00:34:15.752 [2024-07-14 04:50:35.689061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.689093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.689251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.689278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.689457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.689484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.689721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.689750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.689932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.689960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.690167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.690197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.690418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.690448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 
00:34:15.753 [2024-07-14 04:50:35.690657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.690684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.690907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.690938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.691159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.691189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.691413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.691440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.691643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.691674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.691839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.691879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.692101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.692128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.692312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.692339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.692543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.692573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.692775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.692803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 
00:34:15.753 [2024-07-14 04:50:35.692990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.693019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.693204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.693231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.693437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.693464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.693673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.693704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.693879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.693909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.694101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.694128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.694311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.694338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.694519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.694547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.694723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.694750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.694898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.694926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 
00:34:15.753 [2024-07-14 04:50:35.695126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.695158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.695363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.695390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.695602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.695632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.695797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.695827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.696017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.696046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.696231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.696258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.696413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.696440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.696623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.753 [2024-07-14 04:50:35.696650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.753 qpair failed and we were unable to recover it. 00:34:15.753 [2024-07-14 04:50:35.696830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.696860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 00:34:15.754 [2024-07-14 04:50:35.697063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.697093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 
00:34:15.754 [2024-07-14 04:50:35.697290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.697319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 00:34:15.754 [2024-07-14 04:50:35.697503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.697531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 00:34:15.754 [2024-07-14 04:50:35.697700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.697730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 00:34:15.754 [2024-07-14 04:50:35.697932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.697964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 00:34:15.754 [2024-07-14 04:50:35.698166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.698196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 00:34:15.754 [2024-07-14 04:50:35.698386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.698416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 00:34:15.754 [2024-07-14 04:50:35.698610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.698637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 00:34:15.754 [2024-07-14 04:50:35.698824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.698851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 00:34:15.754 [2024-07-14 04:50:35.699014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.699042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 00:34:15.754 [2024-07-14 04:50:35.699197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.699224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 
00:34:15.754 [2024-07-14 04:50:35.699453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.699482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 00:34:15.754 [2024-07-14 04:50:35.699679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.699709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 00:34:15.754 [2024-07-14 04:50:35.699885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.699914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 00:34:15.754 [2024-07-14 04:50:35.700099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.700138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 00:34:15.754 [2024-07-14 04:50:35.700325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.700353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 00:34:15.754 [2024-07-14 04:50:35.700544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.700572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 00:34:15.754 [2024-07-14 04:50:35.700784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.700811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 00:34:15.754 [2024-07-14 04:50:35.700989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.701029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 00:34:15.754 [2024-07-14 04:50:35.701212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.701240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 00:34:15.754 [2024-07-14 04:50:35.701408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.754 [2024-07-14 04:50:35.701438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.754 qpair failed and we were unable to recover it. 
00:34:15.759 [2024-07-14 04:50:35.747064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.759 [2024-07-14 04:50:35.747093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.759 qpair failed and we were unable to recover it. 00:34:15.759 [2024-07-14 04:50:35.747301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.759 [2024-07-14 04:50:35.747328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.759 qpair failed and we were unable to recover it. 00:34:15.759 [2024-07-14 04:50:35.747567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.759 [2024-07-14 04:50:35.747594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.759 qpair failed and we were unable to recover it. 00:34:15.759 [2024-07-14 04:50:35.747774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.759 [2024-07-14 04:50:35.747801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.759 qpair failed and we were unable to recover it. 00:34:15.759 [2024-07-14 04:50:35.748011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.759 [2024-07-14 04:50:35.748039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.759 qpair failed and we were unable to recover it. 00:34:15.759 [2024-07-14 04:50:35.748243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.759 [2024-07-14 04:50:35.748270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.759 qpair failed and we were unable to recover it. 00:34:15.759 [2024-07-14 04:50:35.748510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.759 [2024-07-14 04:50:35.748536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.759 qpair failed and we were unable to recover it. 00:34:15.759 [2024-07-14 04:50:35.748748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.759 [2024-07-14 04:50:35.748791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.759 qpair failed and we were unable to recover it. 00:34:15.759 [2024-07-14 04:50:35.748997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.759 [2024-07-14 04:50:35.749025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.759 qpair failed and we were unable to recover it. 00:34:15.759 [2024-07-14 04:50:35.749238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.759 [2024-07-14 04:50:35.749266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.759 qpair failed and we were unable to recover it. 
00:34:15.759 [2024-07-14 04:50:35.749424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.759 [2024-07-14 04:50:35.749455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.759 qpair failed and we were unable to recover it. 00:34:15.759 [2024-07-14 04:50:35.749817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.759 [2024-07-14 04:50:35.749887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.759 qpair failed and we were unable to recover it. 00:34:15.759 [2024-07-14 04:50:35.750093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.759 [2024-07-14 04:50:35.750120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.759 qpair failed and we were unable to recover it. 00:34:15.759 [2024-07-14 04:50:35.750367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.759 [2024-07-14 04:50:35.750394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.759 qpair failed and we were unable to recover it. 00:34:15.759 [2024-07-14 04:50:35.750602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.759 [2024-07-14 04:50:35.750629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.759 qpair failed and we were unable to recover it. 00:34:15.759 [2024-07-14 04:50:35.750807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.759 [2024-07-14 04:50:35.750834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.759 qpair failed and we were unable to recover it. 00:34:15.759 [2024-07-14 04:50:35.751026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.759 [2024-07-14 04:50:35.751054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.759 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.751235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.751263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.751417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.751444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.751650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.751692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 
00:34:15.760 [2024-07-14 04:50:35.751896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.751924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.752104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.752131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.752314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.752341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.752521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.752548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.752736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.752763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.752992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.753022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.753250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.753277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.753501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.753527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.753682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.753708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.753862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.753902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 
00:34:15.760 [2024-07-14 04:50:35.754106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.754136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.754328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.754357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.754533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.754561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.754764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.754791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.754975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.755003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.755208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.755235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.755445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.755474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.755656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.755686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.755882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.755910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.756115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.756142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 
00:34:15.760 [2024-07-14 04:50:35.756323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.756350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.756512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.756540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.756746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.756791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.757020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.757051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.757267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.757294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.757473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.757500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.757683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.757710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.757876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.757905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.758095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.758122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.758332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.758359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 
00:34:15.760 [2024-07-14 04:50:35.758565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.758596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.758772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.758803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.758968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.759000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.759202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.759230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.759445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.759472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.759628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.759654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.759863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.759898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.760 [2024-07-14 04:50:35.760103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.760 [2024-07-14 04:50:35.760133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.760 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.760312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.760343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.760568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.760595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 
00:34:15.761 [2024-07-14 04:50:35.760749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.760777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.760945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.760973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.761222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.761249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.761461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.761490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.761696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.761726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.761920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.761948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.762148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.762179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.762357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.762386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.762583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.762609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.762792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.762819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 
00:34:15.761 [2024-07-14 04:50:35.763025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.763053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.763222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.763249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.763450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.763481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.763711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.763741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.763972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.764000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.764210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.764240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.764437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.764467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.764673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.764701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.764884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.764912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.765061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.765088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 
00:34:15.761 [2024-07-14 04:50:35.765263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.765291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.765472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.765502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.765736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.765767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.765975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.766003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.766185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.766216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.766417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.766447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.766659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.766686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.766839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.766872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.767078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.767105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.767290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.767317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 
00:34:15.761 [2024-07-14 04:50:35.767492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.767539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.767740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.767770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.761 [2024-07-14 04:50:35.767987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.761 [2024-07-14 04:50:35.768015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.761 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.768208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.768238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.768440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.768467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.768648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.768675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.768891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.768919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.769112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.769142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.769344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.769371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.769573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.769603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 
00:34:15.762 [2024-07-14 04:50:35.769800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.769830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.770018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.770047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.770228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.770255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.770430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.770458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.770617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.770644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.770849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.770887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.771108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.771138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.771335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.771361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.771509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.771536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.771719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.771746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 
00:34:15.762 [2024-07-14 04:50:35.771908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.771935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.772094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.772121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.772302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.772329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.772543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.772570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.772774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.772801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.772986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.773014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.773198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.773225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.773405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.773433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.773638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.773668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.773873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.773900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 
00:34:15.762 [2024-07-14 04:50:35.774078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.774106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.774268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.774295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.774447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.774475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.774623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.774650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.774844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.774881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.775084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.775111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.775291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.775320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.775490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.775518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.775726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.775752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.775966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.776011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 
00:34:15.762 [2024-07-14 04:50:35.776212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.776247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.776450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.776477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.776652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.776682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.762 qpair failed and we were unable to recover it. 00:34:15.762 [2024-07-14 04:50:35.776888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.762 [2024-07-14 04:50:35.776919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.777119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.777146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.777312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.777342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.777541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.777571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.777746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.777773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.777985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.778013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.778241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.778271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 
00:34:15.763 [2024-07-14 04:50:35.778502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.778530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.778711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.778738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.778918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.778946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.779178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.779208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.779409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.779440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.779664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.779694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.779931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.779959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.780132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.780160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.780341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.780368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.780570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.780597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 
00:34:15.763 [2024-07-14 04:50:35.780886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.780917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.781152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.781182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.781352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.781379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.781582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.781610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.781813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.781843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.782019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.782046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.782255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.782282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.782470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.782499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.782680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.782707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.782932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.782962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 
00:34:15.763 [2024-07-14 04:50:35.783198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.783227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.783425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.783452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.783628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.783655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.783824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.783851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.784045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.784073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.784278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.784308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.784535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.784565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.784774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.784802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.784985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.785012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.785198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.785226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 
00:34:15.763 [2024-07-14 04:50:35.785409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.785441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.785649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.785679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.785850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.785888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.786114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.786141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.763 qpair failed and we were unable to recover it. 00:34:15.763 [2024-07-14 04:50:35.786362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.763 [2024-07-14 04:50:35.786392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.786548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.786578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.786783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.786810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.786998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.787026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.787185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.787213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.787394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.787421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 
00:34:15.764 [2024-07-14 04:50:35.787641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.787671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.787893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.787934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.788129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.788158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.788343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.788371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.788534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.788561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.788713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.788740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.788930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.788959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.789189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.789219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.789443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.789470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.789677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.789704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 
00:34:15.764 [2024-07-14 04:50:35.789851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.789885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.790071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.790099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.790255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.790282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.790488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.790518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.790713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.790740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.790912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.790944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.791143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.791173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.791398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.791425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.791607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.791633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.791810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.791838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 
00:34:15.764 [2024-07-14 04:50:35.792023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.792050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.792285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.792315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.792498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.792527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.792735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.792762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.792945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.792973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.793160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.793191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.793361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.793388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.793572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.793599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.793749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.793776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.793987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.794016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 
00:34:15.764 [2024-07-14 04:50:35.794198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.794229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.794391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.794435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.794670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.794698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.794932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.794963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.795182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.764 [2024-07-14 04:50:35.795213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.764 qpair failed and we were unable to recover it. 00:34:15.764 [2024-07-14 04:50:35.795442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.795469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.795629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.795657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.795860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.795898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.796077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.796104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.796340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.796370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 
00:34:15.765 [2024-07-14 04:50:35.796576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.796606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.796831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.796858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.797018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.797045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.797215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.797243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.797449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.797476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.797666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.797695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.797926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.797956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.798166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.798193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.798379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.798406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.798610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.798640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 
00:34:15.765 [2024-07-14 04:50:35.798840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.798873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.799031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.799059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.799243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.799271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.799478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.799505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.799688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.799715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.799878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.799905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.800113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.800140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.800357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.800388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.800579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.800609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.800798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.800827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 
00:34:15.765 [2024-07-14 04:50:35.801023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.801052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.801262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.801290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.801469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.801496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.801677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.801704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.801910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.801938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.802095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.802123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.802302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.802329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.802542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.802585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.802796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.802823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.803008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.803036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 
00:34:15.765 [2024-07-14 04:50:35.803193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.803225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.803410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.803436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.803637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.803668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.803906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.803934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.765 qpair failed and we were unable to recover it. 00:34:15.765 [2024-07-14 04:50:35.804092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.765 [2024-07-14 04:50:35.804119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.804274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.804301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.804507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.804534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.804690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.804717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.804902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.804930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.805162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.805192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 
00:34:15.766 [2024-07-14 04:50:35.805423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.805450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.805633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.805660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.805888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.805918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.806154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.806181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.806344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.806371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.806530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.806557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.806697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.806724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.806925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.806953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.807197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.807224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.807404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.807431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 
00:34:15.766 [2024-07-14 04:50:35.807637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.807664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.807819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.807847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.808038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.808066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.808238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.808268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.808440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.808470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.808645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.808675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.808838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.808883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.809108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.809154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.809339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.809368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.809543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.809573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 
00:34:15.766 [2024-07-14 04:50:35.809774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.809801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.809982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.810010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.810163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.810190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.810341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.810367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.810543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.810569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.810750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.810777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.810967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.810997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.811202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.811229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.811387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.811414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.811600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.811626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 
00:34:15.766 [2024-07-14 04:50:35.811806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.766 [2024-07-14 04:50:35.811833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.766 qpair failed and we were unable to recover it. 00:34:15.766 [2024-07-14 04:50:35.812037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.812066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.812274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.812301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.812515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.812542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.812720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.812748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.812931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.812958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.813142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.813169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.813342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.813374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.813741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.813792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.814013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.814041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 
00:34:15.767 [2024-07-14 04:50:35.814200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.814227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.814408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.814435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.814662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.814689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.814895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.814923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.815128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.815169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.815357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.815386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.815614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.815644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.815831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.815858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.816049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.816076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.816229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.816256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 
00:34:15.767 [2024-07-14 04:50:35.816435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.816479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.816657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.816684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.816860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.816904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.817061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.817088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.817294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.817321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.817499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.817525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.817679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.817706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.817884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.817912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.818097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.818125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.818330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.818373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 
00:34:15.767 [2024-07-14 04:50:35.818552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.818579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.818739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.818766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.818951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.818979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.819131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.819160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.819347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.819374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.819553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.819580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.819797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.819823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.820008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.820037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.820240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.820269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.820476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.820503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 
00:34:15.767 [2024-07-14 04:50:35.820682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.820709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.767 qpair failed and we were unable to recover it. 00:34:15.767 [2024-07-14 04:50:35.820875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.767 [2024-07-14 04:50:35.820903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.821087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.821115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.821289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.821316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.821597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.821647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.821888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.821916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.822100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.822128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.822339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.822367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.822523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.822550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.822757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.822784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 
00:34:15.768 [2024-07-14 04:50:35.822969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.822996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.823174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.823201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.823435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.823465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.823812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.823876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.824082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.824114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.824294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.824321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.824537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.824564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.824773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.824800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.824982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.825009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.825189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.825216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 
00:34:15.768 [2024-07-14 04:50:35.825365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.825392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.825543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.825570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.825777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.825804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.825953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.825982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.826161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.826188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.826366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.826393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.826581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.826608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.826790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.826819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.827007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.827035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.827215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.827243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 
00:34:15.768 [2024-07-14 04:50:35.827450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.827479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.827675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.827705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.827901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.827929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.828103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.828131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.828317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.828344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.828534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.828565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.828754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.828781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.828963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.828991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.829195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.829221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.829373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.829400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 
00:34:15.768 [2024-07-14 04:50:35.829689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.829749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.829957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.829985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.768 qpair failed and we were unable to recover it. 00:34:15.768 [2024-07-14 04:50:35.830170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.768 [2024-07-14 04:50:35.830198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.830396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.830426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.830636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.830663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.830853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.830887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.831100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.831126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.831345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.831374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.831555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.831583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.831737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.831764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 
00:34:15.769 [2024-07-14 04:50:35.831949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.831976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.832155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.832182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.832424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.832451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.832634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.832660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.832843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.832885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.833044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.833071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.833251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.833278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.833481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.833510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.833708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.833737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.833906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.833934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 
00:34:15.769 [2024-07-14 04:50:35.834093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.834120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.834353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.834383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.834598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.834624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.834780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.834808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.834987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.835015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.835208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.835235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.835437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.835467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.835643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.835672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.835875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.835904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.836063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.836089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 
00:34:15.769 [2024-07-14 04:50:35.836291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.836321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.836491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.836519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.836757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.836786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.836989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.837019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.837191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.837218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.837401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.837428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.837601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.837628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.837811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.837838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.837998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.838025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.838206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.838233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 
00:34:15.769 [2024-07-14 04:50:35.838414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.838440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.838624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.838651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.838862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.769 [2024-07-14 04:50:35.838898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.769 qpair failed and we were unable to recover it. 00:34:15.769 [2024-07-14 04:50:35.839088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.839114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.839291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.839318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.839528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.839555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.839739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.839766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.839938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.839966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.840210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.840240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.840462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.840489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 
00:34:15.770 [2024-07-14 04:50:35.840706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.840733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.840935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.840963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.841261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.841325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.841525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.841554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.841743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.841777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.842000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.842028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.842235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.842262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.842413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.842441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.842651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.842679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.842886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.842914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 
00:34:15.770 [2024-07-14 04:50:35.843075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.843102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.843282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.843308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.843486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.843513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.843690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.843717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.843877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.843906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.844097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.844124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.844300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.844327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.844508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.844539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.844704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.844732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.844882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.844911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 
00:34:15.770 [2024-07-14 04:50:35.845118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.845145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.845298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.845327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.845509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.845537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.845721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.845758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.845966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.845994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.846880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.846923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.847115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.847155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.847360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.770 [2024-07-14 04:50:35.847387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.770 qpair failed and we were unable to recover it. 00:34:15.770 [2024-07-14 04:50:35.847550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.847576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.847734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.847761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 
00:34:15.771 [2024-07-14 04:50:35.847920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.847951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.848137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.848164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.848370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.848398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.848576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.848602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.848807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.848834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.849046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.849075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.849321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.849351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.849550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.849579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.849775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.849803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.850020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.850051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 
00:34:15.771 [2024-07-14 04:50:35.850240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.850269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.850478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.850505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.850687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.850715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.850897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.850939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.851100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.851131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.851290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.851318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.851476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.851503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.851657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.851684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.851858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.851892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.852096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.852137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 
00:34:15.771 [2024-07-14 04:50:35.852335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.852362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.852573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.852599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.852759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.852788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.852970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.852997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.853155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.853182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.853409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.853438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.853641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.853670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.853879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.853919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.854136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.854163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.854345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.854372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 
00:34:15.771 [2024-07-14 04:50:35.854578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.854608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.854780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.854810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.771 qpair failed and we were unable to recover it. 00:34:15.771 [2024-07-14 04:50:35.855035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.771 [2024-07-14 04:50:35.855062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.855243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.855286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.855516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.855546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.855729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.855757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.855931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.855958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.856118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.856150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.856329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.856355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.856559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.856589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 
00:34:15.772 [2024-07-14 04:50:35.856762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.856789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.856940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.856967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.857129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.857161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.857343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.857370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.857522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.857549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.857734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.857767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.857975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.858003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.858184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.858211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.858388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.858415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.858597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.858625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 
00:34:15.772 [2024-07-14 04:50:35.858803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.858830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.859001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.859029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.859210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.859241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.859472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.859499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.859678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.859720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.859942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.859972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.860164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.860191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.860427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.860457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.860657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.860686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.860914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.860942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 
00:34:15.772 [2024-07-14 04:50:35.861159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.861187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.861366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.861394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.861603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.861630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.861801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.861831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.862072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.862099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.862288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.862315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.862494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.862521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.862719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.862748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.772 qpair failed and we were unable to recover it. 00:34:15.772 [2024-07-14 04:50:35.862965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.772 [2024-07-14 04:50:35.862993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.863174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.863202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 
00:34:15.773 [2024-07-14 04:50:35.863404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.863435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.863634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.863663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.863898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.863927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.864125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.864154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.864321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.864350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.864530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.864557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.864720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.864749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.864953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.864980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.865139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.865166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.865309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.865336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 
00:34:15.773 [2024-07-14 04:50:35.865542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.865569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.865765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.865794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.865974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.866003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.866171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.866197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.866421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.866448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.866612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.866640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.866832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.866859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.867106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.867133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.867317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.867344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.867523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.867550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 
00:34:15.773 [2024-07-14 04:50:35.867733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.867761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.867964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.867992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.868194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.868221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.868378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.868404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.868628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.868660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.868873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.868900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.869108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.869135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.869308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.869335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.869517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.869543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.869704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.869733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 
00:34:15.773 [2024-07-14 04:50:35.869907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.869934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.870140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.870167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.870350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.870377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.870554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.870581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.773 qpair failed and we were unable to recover it. 00:34:15.773 [2024-07-14 04:50:35.870761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.773 [2024-07-14 04:50:35.870788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.870968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.870995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.871204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.871230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.871388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.871415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.871611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.871638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.871814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.871841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 
00:34:15.774 [2024-07-14 04:50:35.871999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.872026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.872209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.872235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.872418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.872446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.872624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.872651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.872829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.872857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.873052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.873079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.873234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.873262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.873465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.873492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.873670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.873697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.873850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.873886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 
00:34:15.774 [2024-07-14 04:50:35.874076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.874103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.874255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.874282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.874461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.874490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.874637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.874664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.874851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.874886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.875038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.875066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.875247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.875274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.875432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.875459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.875632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.875659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.875875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.875903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 
00:34:15.774 [2024-07-14 04:50:35.876081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.876107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.876286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.876313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.876489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.876516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.876706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.876733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.876886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.876918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.877099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.877126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.877306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.877333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.877514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.877541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.877745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.774 [2024-07-14 04:50:35.877772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.774 qpair failed and we were unable to recover it. 00:34:15.774 [2024-07-14 04:50:35.877949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.877976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 
00:34:15.775 [2024-07-14 04:50:35.878132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.878159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.878336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.878363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.878544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.878571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.878777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.878803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.878984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.879011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.879195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.879222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.879430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.879457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.879636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.879664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.879821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.879848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.880030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.880058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 
00:34:15.775 [2024-07-14 04:50:35.880239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.880266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.880449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.880476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.880649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.880676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.880861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.880907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.881092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.881119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.881273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.881300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.881453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.881489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.881673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.881700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.881885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.881914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.882068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.882095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 
00:34:15.775 [2024-07-14 04:50:35.882280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.882307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.882501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.882528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.882684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.882712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.882863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.882897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.883105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.883132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.883311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.883338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.883497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.883524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.883700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.883727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.775 [2024-07-14 04:50:35.883888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.775 [2024-07-14 04:50:35.883916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.775 qpair failed and we were unable to recover it. 00:34:15.776 [2024-07-14 04:50:35.884104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.884131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it. 
00:34:15.776 [2024-07-14 04:50:35.884313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.884340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it. 00:34:15.776 [2024-07-14 04:50:35.884523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.884550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it. 00:34:15.776 [2024-07-14 04:50:35.884733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.884760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it. 00:34:15.776 [2024-07-14 04:50:35.884938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.884966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it. 00:34:15.776 [2024-07-14 04:50:35.885144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.885176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it. 00:34:15.776 [2024-07-14 04:50:35.885333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.885360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it. 00:34:15.776 [2024-07-14 04:50:35.885584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.885611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it. 00:34:15.776 [2024-07-14 04:50:35.885816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.885844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it. 00:34:15.776 [2024-07-14 04:50:35.886053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.886081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it. 00:34:15.776 [2024-07-14 04:50:35.886264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.886291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it. 
00:34:15.776 [2024-07-14 04:50:35.886448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.886475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it. 00:34:15.776 [2024-07-14 04:50:35.886689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.886716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it. 00:34:15.776 [2024-07-14 04:50:35.886879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.886906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it. 00:34:15.776 [2024-07-14 04:50:35.887083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.887110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it. 00:34:15.776 [2024-07-14 04:50:35.887347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.887375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it. 00:34:15.776 [2024-07-14 04:50:35.887558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.887586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it. 00:34:15.776 [2024-07-14 04:50:35.887801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.887831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it. 00:34:15.776 [2024-07-14 04:50:35.888011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.888039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it. 00:34:15.776 [2024-07-14 04:50:35.888223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.888251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it. 00:34:15.776 [2024-07-14 04:50:35.888456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.888486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it. 
00:34:15.776 [2024-07-14 04:50:35.888712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.776 [2024-07-14 04:50:35.888742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:15.776 qpair failed and we were unable to recover it.
[the same pair of errors repeats continuously from 04:50:35.888 through 04:50:35.937: posix.c:1037:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420, each attempt ending with "qpair failed and we were unable to recover it."]
00:34:16.064 [2024-07-14 04:50:35.937355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-14 04:50:35.937383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it.
00:34:16.064 [2024-07-14 04:50:35.937547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-14 04:50:35.937574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-14 04:50:35.937755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-14 04:50:35.937789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-14 04:50:35.937959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-14 04:50:35.937986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-14 04:50:35.938143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-14 04:50:35.938173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-14 04:50:35.938350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-14 04:50:35.938403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-14 04:50:35.938636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-14 04:50:35.938665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-14 04:50:35.938849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-14 04:50:35.938895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-14 04:50:35.939078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-14 04:50:35.939104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-14 04:50:35.939310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-14 04:50:35.939337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-14 04:50:35.939544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-14 04:50:35.939570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 
00:34:16.065 [2024-07-14 04:50:35.939724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.939750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.939958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.939986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.940171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.940198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.940387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.940413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.940596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.940622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.940836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.940881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.941068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.941097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.941289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.941318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.941513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.941539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.941694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.941720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 
00:34:16.065 [2024-07-14 04:50:35.941912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.941939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.942118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.942144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.942333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.942359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.942518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.942545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.942753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.942780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.942987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.943013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.943196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.943222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.943405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.943431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.943590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.943617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.943801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.943827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 
00:34:16.065 [2024-07-14 04:50:35.943996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.944024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.944230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.944257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.944436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.944463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.944650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.944676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.944832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.944880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.945092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.945118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.945325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.945357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.945538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.945564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.945769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.945797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.946037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.946065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 
00:34:16.065 [2024-07-14 04:50:35.946277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.946303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.946509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.946541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-14 04:50:35.946749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-14 04:50:35.946775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.946989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.947016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.947174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.947200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.947383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.947409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.947596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.947622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.947801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.947828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.948037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.948064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.948295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.948336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 
00:34:16.066 [2024-07-14 04:50:35.948561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.948587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.948766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.948792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.948987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.949014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.949238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.949267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.949458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.949488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.949697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.949723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.949962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.949992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.950220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.950247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.950425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.950453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.950632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.950667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 
00:34:16.066 [2024-07-14 04:50:35.950887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.950914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.951112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.951142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.951321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.951350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.951579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.951617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.951819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.951854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.952080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.952125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.952339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.952366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.952547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.952573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.952786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.952812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.953067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.953094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 
00:34:16.066 [2024-07-14 04:50:35.953307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.953333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.953551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.953580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.953797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.953827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.954067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.954094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.954284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.954310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.954486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.954513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.954681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.954708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.954919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.954946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.955140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.955175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.955372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.955420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 
00:34:16.066 [2024-07-14 04:50:35.955600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.955626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.955838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-14 04:50:35.955876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-14 04:50:35.956057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.956083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.956248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.956276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.956434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.956461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.956637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.956665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.956845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.956878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.957062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.957089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.957233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.957260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.957438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.957464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 
00:34:16.067 [2024-07-14 04:50:35.957613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.957640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.957801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.957828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.958038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.958064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.958258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.958285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.958446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.958473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.958685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.958712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.958921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.958948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.959105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.959131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.959336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.959363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.959544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.959571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 
00:34:16.067 [2024-07-14 04:50:35.959756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.959785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.959980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.960010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.960246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.960273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.960452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.960479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.960659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.960685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.960890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.960923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.961146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.961184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.961422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.961452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.961633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.961660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.961889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.961929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 
00:34:16.067 [2024-07-14 04:50:35.962116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.962145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.962348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.962375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.962589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.962616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.962769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.962796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.963013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.963040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.963186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.963214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.963399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.963426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.963606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.963634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.963840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.963873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.964077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.964106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 
00:34:16.067 [2024-07-14 04:50:35.964288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.964314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.964517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.964548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.067 [2024-07-14 04:50:35.964737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.067 [2024-07-14 04:50:35.964764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.067 qpair failed and we were unable to recover it. 00:34:16.068 [2024-07-14 04:50:35.965013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.068 [2024-07-14 04:50:35.965040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.068 qpair failed and we were unable to recover it. 00:34:16.068 [2024-07-14 04:50:35.965233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.068 [2024-07-14 04:50:35.965260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.068 qpair failed and we were unable to recover it. 00:34:16.068 [2024-07-14 04:50:35.965446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.068 [2024-07-14 04:50:35.965473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.068 qpair failed and we were unable to recover it. 00:34:16.068 [2024-07-14 04:50:35.965679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.068 [2024-07-14 04:50:35.965706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.068 qpair failed and we were unable to recover it. 00:34:16.068 [2024-07-14 04:50:35.965913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.068 [2024-07-14 04:50:35.965943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.068 qpair failed and we were unable to recover it. 00:34:16.068 [2024-07-14 04:50:35.966139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.068 [2024-07-14 04:50:35.966175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.068 qpair failed and we were unable to recover it. 00:34:16.068 [2024-07-14 04:50:35.966345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.068 [2024-07-14 04:50:35.966372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.068 qpair failed and we were unable to recover it. 
00:34:16.068 [2024-07-14 04:50:35.966574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.068 [2024-07-14 04:50:35.966602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:16.068 qpair failed and we were unable to recover it.
00:34:16.068 [... the same three-line error sequence (posix.c:1037:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 04:50:35.966574 through 04:50:36.014563 ...]
00:34:16.073 [2024-07-14 04:50:36.014528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.073 [2024-07-14 04:50:36.014563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:16.073 qpair failed and we were unable to recover it.
00:34:16.073 [2024-07-14 04:50:36.014731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-14 04:50:36.014767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-14 04:50:36.014954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-14 04:50:36.014981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-14 04:50:36.015198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-14 04:50:36.015225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-14 04:50:36.015406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-14 04:50:36.015435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-14 04:50:36.015671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-14 04:50:36.015701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-14 04:50:36.015908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-14 04:50:36.015936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-14 04:50:36.016138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-14 04:50:36.016168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-14 04:50:36.016353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-14 04:50:36.016389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-14 04:50:36.016614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-14 04:50:36.016641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-14 04:50:36.016875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-14 04:50:36.016905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 
00:34:16.073 [2024-07-14 04:50:36.017138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-14 04:50:36.017167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-14 04:50:36.017370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-14 04:50:36.017397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-14 04:50:36.017558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-14 04:50:36.017586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-14 04:50:36.017791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-14 04:50:36.017821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-14 04:50:36.018008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-14 04:50:36.018035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-14 04:50:36.018235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-14 04:50:36.018264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-14 04:50:36.018489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-14 04:50:36.018518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-14 04:50:36.018746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-14 04:50:36.018773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-14 04:50:36.018952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-14 04:50:36.018983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-14 04:50:36.019211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-14 04:50:36.019240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 
00:34:16.074 [2024-07-14 04:50:36.019444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.019471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.019696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.019730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.019957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.019987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.020158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.020186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.020390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.020420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.020620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.020649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.020846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.020880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.021109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.021138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.021336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.021366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.021578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.021605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 
00:34:16.074 [2024-07-14 04:50:36.021815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.021844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.022024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.022051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.022208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.022236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.022387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.022415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.022623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.022653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.022875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.022903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.023085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.023115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.023325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.023354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.023581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.023608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.023817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.023847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 
00:34:16.074 [2024-07-14 04:50:36.024047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.024077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.024286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.024313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.024482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.024513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.024714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.024744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.024973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.025001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.025187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.025217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.025430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.025458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.025666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.025693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.025909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.025940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.026114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.026143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 
00:34:16.074 [2024-07-14 04:50:36.026346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.026375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.026605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.026632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.026812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.026838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.027028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.027056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.074 [2024-07-14 04:50:36.027287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.074 [2024-07-14 04:50:36.027317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.074 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.027560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.027589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.027774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.027800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.027969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.027998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.028189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.028219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.028430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.028457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 
00:34:16.075 [2024-07-14 04:50:36.028664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.028706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.028912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.028948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.029153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.029180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.029384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.029414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.029615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.029645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.029827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.029853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.030044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.030072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.030253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.030283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.030493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.030520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.030725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.030752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 
00:34:16.075 [2024-07-14 04:50:36.031002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.031033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.031215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.031242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.031425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.031453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.031682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.031712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.031892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.031919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.032133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.032164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.032361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.032391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.032601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.032627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.032833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.032863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.033105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.033133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 
00:34:16.075 [2024-07-14 04:50:36.033293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.033321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.033487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.033517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.033692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.033723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.033909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.033938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.034098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.034126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.034332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.034358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.034539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.034566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.034774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.034803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.035005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.035036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.035262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.035289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 
00:34:16.075 [2024-07-14 04:50:36.035502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.035532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.035737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.035766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.035950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.035978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.036159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.036186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.036386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.036415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.075 qpair failed and we were unable to recover it. 00:34:16.075 [2024-07-14 04:50:36.036611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.075 [2024-07-14 04:50:36.036638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.036811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.036843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.037047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.037077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.037274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.037301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.037533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.037563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 
00:34:16.076 [2024-07-14 04:50:36.037769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.037798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.038001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.038033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.038243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.038273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.038479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.038509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.038710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.038736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.038912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.038943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.039173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.039203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.039412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.039439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.039620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.039657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.039887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.039918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 
00:34:16.076 [2024-07-14 04:50:36.040103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.040130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.040314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.040341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.040529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.040560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.040730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.040757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.040958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.040989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.041227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.041255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.041456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.041483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.041714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.041743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.041979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.042010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.042181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.042208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 
00:34:16.076 [2024-07-14 04:50:36.042392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.042421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.042656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.042686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.042889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.042917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.043075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.043103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.043283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.043310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.043493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.043520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.043726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.043753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.043971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.044001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.044236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.044263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-14 04:50:36.044463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-14 04:50:36.044493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 
00:34:16.076 [2024-07-14 04:50:36.044687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.076 [2024-07-14 04:50:36.044717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:16.076 qpair failed and we were unable to recover it.
[The same error pair — posix.c:1037:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420, each attempt ending with "qpair failed and we were unable to recover it." — repeats continuously in the log from 2024-07-14 04:50:36.044687 through 2024-07-14 04:50:36.093249 with no other output in between.]
00:34:16.082 [2024-07-14 04:50:36.093437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.093467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.093671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.093698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.093914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.093945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.094117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.094147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.094377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.094403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.094590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.094616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.094825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.094855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.095085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.095112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.095348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.095377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.095582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.095613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 
00:34:16.082 [2024-07-14 04:50:36.095842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.095877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.096118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.096148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.096322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.096353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.096558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.096586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.096772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.096799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.096963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.096990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.097148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.097175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.097381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.097411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.097610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.097639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.097840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.097873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 
00:34:16.082 [2024-07-14 04:50:36.098082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.098112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.098347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.098377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.098585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.098612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.098819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.098848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.099068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.099099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.099309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.099335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.099542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.099585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.099791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.099821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-14 04:50:36.100050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-14 04:50:36.100082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.100289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.100320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 
00:34:16.083 [2024-07-14 04:50:36.100515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.100546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.100777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.100804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.101015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.101047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.101254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.101284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.101484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.101510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.101711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.101741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.101937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.101967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.102169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.102196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.102340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.102367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.102567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.102596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 
00:34:16.083 [2024-07-14 04:50:36.102760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.102787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.102969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.102997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.103201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.103231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.103417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.103444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.103616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.103643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.103882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.103913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.104118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.104145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.104348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.104378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.104543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.104573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.104797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.104824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 
00:34:16.083 [2024-07-14 04:50:36.105007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.105038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.105273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.105303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.105513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.105540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.105717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.105747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.105975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.106003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.106185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.106212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.106416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.106445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.106643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.106672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.106889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.106916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.107127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.107156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 
00:34:16.083 [2024-07-14 04:50:36.107388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.107415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.107616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.107643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.107848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.107885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.108089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.108119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.108349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.108376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.108557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.108584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.108811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.108841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.109074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.109102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.109306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-14 04:50:36.109341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-14 04:50:36.109543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.109574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 
00:34:16.084 [2024-07-14 04:50:36.109798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.109825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.110046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.110075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.110277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.110304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.110483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.110510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.110712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.110741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.110940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.110971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.111176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.111203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.111408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.111437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.111613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.111644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.111842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.111874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 
00:34:16.084 [2024-07-14 04:50:36.112084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.112113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.112336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.112366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.112572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.112599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.112821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.112852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.113061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.113092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.113323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.113350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.113538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.113565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.113740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.113770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.113996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.114024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.114209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.114236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 
00:34:16.084 [2024-07-14 04:50:36.114449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.114479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.114680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.114706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.114893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.114924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.115123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.115152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.115356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.115382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.115591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.115621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.115796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.115825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.116035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.116062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.116285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.116314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.116488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.116518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 
00:34:16.084 [2024-07-14 04:50:36.116712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.116738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.116941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.116971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.117177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.117207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.117403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.117430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.117637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.117666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.117892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.117922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.118100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.118127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.118354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.118383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.118584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.118621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 00:34:16.084 [2024-07-14 04:50:36.118819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.084 [2024-07-14 04:50:36.118846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.084 qpair failed and we were unable to recover it. 
00:34:16.084 [2024-07-14 04:50:36.119072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.119102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.119328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.119357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.119584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.119611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.119798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.119825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.120022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.120049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.120236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.120263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.120472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.120501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.120695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.120725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.120928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.120956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.121141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.121168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 
00:34:16.085 [2024-07-14 04:50:36.121378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.121408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.121599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.121626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.121808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.121838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.122027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.122055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.122240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.122267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.122473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.122503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.122723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.122753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.122955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.122983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.123164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.123191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.123371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.123398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 
00:34:16.085 [2024-07-14 04:50:36.123603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.123630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.123813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.123840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.124027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.124054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.124233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.124259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.124443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.124470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.124745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.124775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.124976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.125004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.125239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.125268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.125511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.125538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 00:34:16.085 [2024-07-14 04:50:36.125802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.085 [2024-07-14 04:50:36.125828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.085 qpair failed and we were unable to recover it. 
00:34:16.090 [2024-07-14 04:50:36.173138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-14 04:50:36.173165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-14 04:50:36.173379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-14 04:50:36.173408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-14 04:50:36.173634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-14 04:50:36.173663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-14 04:50:36.173835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-14 04:50:36.173862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-14 04:50:36.174110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-14 04:50:36.174140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-14 04:50:36.174419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-14 04:50:36.174448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-14 04:50:36.174627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-14 04:50:36.174655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-14 04:50:36.174859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-14 04:50:36.174907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-14 04:50:36.175187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.175216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.175443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.175470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 
00:34:16.091 [2024-07-14 04:50:36.175680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.175711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.175944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.175975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.176161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.176188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.176414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.176444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.176673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.176703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.176938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.176966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.177178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.177208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.177403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.177433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.177631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.177658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.177894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.177922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 
00:34:16.091 [2024-07-14 04:50:36.178108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.178135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.178337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.178363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.178552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.178578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.178784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.178814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.179027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.179055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.179284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.179314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.179510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.179540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.179727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.179754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.179939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.179967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.180174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.180201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 
00:34:16.091 [2024-07-14 04:50:36.180382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.180409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.180617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.180647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.180882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.180913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.181121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.181147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.181326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.181354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.181584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.181618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.181798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.181825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.182016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.182043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.182250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.182280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.182479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.182506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 
00:34:16.091 [2024-07-14 04:50:36.182728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.182757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.183008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.091 [2024-07-14 04:50:36.183035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.091 qpair failed and we were unable to recover it. 00:34:16.091 [2024-07-14 04:50:36.183222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.183248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.183461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.183491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.183724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.183750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.183961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.183988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.184197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.184227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.184445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.184474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.184680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.184707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.184916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.184946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 
00:34:16.092 [2024-07-14 04:50:36.185137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.185167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.185349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.185376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.185558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.185585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.185795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.185825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.186046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.186074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.186232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.186260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.186439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.186466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.186644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.186671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.186850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.186885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.187102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.187132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 
00:34:16.092 [2024-07-14 04:50:36.187334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.187362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.187571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.187598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.187787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.187814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.188018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.188046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.188289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.188319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.188516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.188545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.188778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.188805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.188970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.188998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.189158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.189185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.189341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.189369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 
00:34:16.092 [2024-07-14 04:50:36.189551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.189578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.189781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.189811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.190037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.190065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.190300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.190330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.190503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.190533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.190772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.190806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.191013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.191040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.191223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.191250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.191515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.191542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.191779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.191808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 
00:34:16.092 [2024-07-14 04:50:36.192014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.192045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.192255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.192282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.092 qpair failed and we were unable to recover it. 00:34:16.092 [2024-07-14 04:50:36.192471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.092 [2024-07-14 04:50:36.192497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.192724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.192754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.192970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.192997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.193157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.193184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.193386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.193416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.193651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.193677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.193889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.193917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.194103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.194130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 
00:34:16.093 [2024-07-14 04:50:36.194332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.194359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.194536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.194566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.194766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.194796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.195008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.195035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.195184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.195211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.195412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.195439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.195612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.195638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.195910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.195941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.196140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.196171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.196369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.196396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 
00:34:16.093 [2024-07-14 04:50:36.196560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.196587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.196855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.196893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.197130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.197157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.197343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.197370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.197528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.197556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.197739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.197766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.197986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.198017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.198246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.198275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.198508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.198535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.198739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.198768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 
00:34:16.093 [2024-07-14 04:50:36.198998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.199029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.199200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.199228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.199391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.199418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.199657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.199687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.199879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.199907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.200139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.200173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.200377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.200407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.200687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.200713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.200918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.200946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.201152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.201182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 
00:34:16.093 [2024-07-14 04:50:36.201394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.201422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.201629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.201656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.093 [2024-07-14 04:50:36.201818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.093 [2024-07-14 04:50:36.201845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.093 qpair failed and we were unable to recover it. 00:34:16.094 [2024-07-14 04:50:36.202062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.094 [2024-07-14 04:50:36.202090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.094 qpair failed and we were unable to recover it. 00:34:16.094 [2024-07-14 04:50:36.202318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.094 [2024-07-14 04:50:36.202348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.094 qpair failed and we were unable to recover it. 00:34:16.094 [2024-07-14 04:50:36.202556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.094 [2024-07-14 04:50:36.202585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.094 qpair failed and we were unable to recover it. 00:34:16.094 [2024-07-14 04:50:36.202791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.094 [2024-07-14 04:50:36.202817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.094 qpair failed and we were unable to recover it. 00:34:16.094 [2024-07-14 04:50:36.203001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.094 [2024-07-14 04:50:36.203028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.094 qpair failed and we were unable to recover it. 00:34:16.094 [2024-07-14 04:50:36.203212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.094 [2024-07-14 04:50:36.203242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.094 qpair failed and we were unable to recover it. 00:34:16.094 [2024-07-14 04:50:36.203449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.094 [2024-07-14 04:50:36.203477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.094 qpair failed and we were unable to recover it. 
00:34:16.094 [2024-07-14 04:50:36.203662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.094 [2024-07-14 04:50:36.203692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.094 qpair failed and we were unable to recover it. 00:34:16.094 [2024-07-14 04:50:36.203885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.094 [2024-07-14 04:50:36.203924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.094 qpair failed and we were unable to recover it. 00:34:16.094 [2024-07-14 04:50:36.204096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.094 [2024-07-14 04:50:36.204124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.094 qpair failed and we were unable to recover it. 00:34:16.094 [2024-07-14 04:50:36.204282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.094 [2024-07-14 04:50:36.204308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.094 qpair failed and we were unable to recover it. 00:34:16.094 [2024-07-14 04:50:36.204489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.094 [2024-07-14 04:50:36.204532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.094 qpair failed and we were unable to recover it. 00:34:16.094 [2024-07-14 04:50:36.204765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.094 [2024-07-14 04:50:36.204792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.094 qpair failed and we were unable to recover it. 00:34:16.094 [2024-07-14 04:50:36.205004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.094 [2024-07-14 04:50:36.205034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.094 qpair failed and we were unable to recover it. 00:34:16.094 [2024-07-14 04:50:36.205257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.094 [2024-07-14 04:50:36.205286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.094 qpair failed and we were unable to recover it. 00:34:16.094 [2024-07-14 04:50:36.205474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.094 [2024-07-14 04:50:36.205501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.094 qpair failed and we were unable to recover it. 00:34:16.094 [2024-07-14 04:50:36.205680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.094 [2024-07-14 04:50:36.205707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.094 qpair failed and we were unable to recover it. 
00:34:16.381 [2024-07-14 04:50:36.249489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.249516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.249672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.249699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.249908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.249938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.250144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.250171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.250353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.250380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.250564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.250591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.250774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.250800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.250983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.251011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.251154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.251181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.251375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.251402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 
00:34:16.381 [2024-07-14 04:50:36.251577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.251603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.251790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.251819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.252027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.252055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.252210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.252237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.252422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.252448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.252630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.252656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.252806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.252833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.253093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.253121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.253304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.253332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.253539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.253569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 
00:34:16.381 [2024-07-14 04:50:36.253770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.253800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.253998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.254027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.254231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.254257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.254478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.254505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.254709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.254736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.254947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.254982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.255206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.255236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.255462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.255489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.255698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.381 [2024-07-14 04:50:36.255726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.381 qpair failed and we were unable to recover it. 00:34:16.381 [2024-07-14 04:50:36.255942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.255973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 
00:34:16.382 [2024-07-14 04:50:36.256169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.256196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.256433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.256464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.256643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.256672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.256898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.256926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.257111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.257138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.257318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.257346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.257526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.257554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.257758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.257790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.258025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.258056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.258245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.258273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 
00:34:16.382 [2024-07-14 04:50:36.258429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.258456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.258636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.258665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.258822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.258850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.259126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.259152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.259388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.259418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.259604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.259632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.259793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.259819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.259981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.260009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.260224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.260251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.260460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.260489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 
00:34:16.382 [2024-07-14 04:50:36.260715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.260744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.260919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.260947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.261164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.261192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.261348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.261375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.261580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.261607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.261765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.261792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.262001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.262033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.262237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.262264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.262470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.262497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.262645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.262672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 
00:34:16.382 [2024-07-14 04:50:36.262828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.262855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.263068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.263098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.263284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.263314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.263513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.263539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.263739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.263769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.264004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.264034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.264222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.264248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.264434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.264463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.264663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.264691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 00:34:16.382 [2024-07-14 04:50:36.264893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.382 [2024-07-14 04:50:36.264920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.382 qpair failed and we were unable to recover it. 
00:34:16.382 [2024-07-14 04:50:36.265091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.265120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.265298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.265329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.265566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.265592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.265834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.265863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.266097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.266124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.266311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.266338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.266541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.266571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.266772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.266801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.267004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.267032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.267223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.267253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 
00:34:16.383 [2024-07-14 04:50:36.267476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.267505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.267740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.267767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.267973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.268003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.268237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.268266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.268497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.268524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.268727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.268757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.268943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.268974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.269150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.269177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.269405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.269434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.269657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.269687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 
00:34:16.383 [2024-07-14 04:50:36.269891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.269920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.270191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.270220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.270449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.270479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.270704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.270731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.270928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.270956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.271189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.271219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.271444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.271470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.271675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.271705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.271908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.271937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.272116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.272144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 
00:34:16.383 [2024-07-14 04:50:36.272349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.272379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.272580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.272608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.272784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.383 [2024-07-14 04:50:36.272811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.383 qpair failed and we were unable to recover it. 00:34:16.383 [2024-07-14 04:50:36.273013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.273042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.273218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.273248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.273476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.273507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.273716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.273745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.273918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.273948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.274167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.274193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.274396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.274425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 
00:34:16.384 [2024-07-14 04:50:36.274656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.274683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.274888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.274923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.275135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.275165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.275398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.275424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.275615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.275643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.275823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.275854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.276065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.276095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.276300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.276326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.276482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.276508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.276714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.276744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 
00:34:16.384 [2024-07-14 04:50:36.276980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.277006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.277184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.277213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.277416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.277446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.277647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.277674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.277886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.277925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.278125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.278156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.278361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.278388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.278594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.278623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.278826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.278856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.279085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.279112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 
00:34:16.384 [2024-07-14 04:50:36.279298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.279329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.279554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.279583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.279766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.279793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.279967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.279998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.280175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.280205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.280381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.280408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.280635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.280665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.280843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.280881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.281112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.281144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.281304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.281331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 
00:34:16.384 [2024-07-14 04:50:36.281558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.281587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.281800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.281827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.282069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.282099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.384 [2024-07-14 04:50:36.282338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.384 [2024-07-14 04:50:36.282367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.384 qpair failed and we were unable to recover it. 00:34:16.385 [2024-07-14 04:50:36.282540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.385 [2024-07-14 04:50:36.282567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.385 qpair failed and we were unable to recover it. 00:34:16.385 [2024-07-14 04:50:36.282761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.385 [2024-07-14 04:50:36.282795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.385 qpair failed and we were unable to recover it. 00:34:16.385 [2024-07-14 04:50:36.283016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.385 [2024-07-14 04:50:36.283045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.385 qpair failed and we were unable to recover it. 00:34:16.385 [2024-07-14 04:50:36.283269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.385 [2024-07-14 04:50:36.283296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.385 qpair failed and we were unable to recover it. 00:34:16.385 [2024-07-14 04:50:36.283523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.385 [2024-07-14 04:50:36.283553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.385 qpair failed and we were unable to recover it. 00:34:16.385 [2024-07-14 04:50:36.283777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.385 [2024-07-14 04:50:36.283806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.385 qpair failed and we were unable to recover it. 
00:34:16.390 [2024-07-14 04:50:36.330820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.330883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.331089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.331115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.331319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.331348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.331579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.331605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.331834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.331863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.332085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.332112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.332270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.332296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.332521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.332550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.332726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.332755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.332938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.332965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 
00:34:16.390 [2024-07-14 04:50:36.333149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.333175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.333456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.333482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.333678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.333708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.333918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.333946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.334153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.334182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.334356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.334385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.334675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.334730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.334959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.334986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.335170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.335199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.335398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.335428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 
00:34:16.390 [2024-07-14 04:50:36.335770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.335816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.336055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.336082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.336275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.336307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.336505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.336534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.336708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.336737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.337005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.337031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.337276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.337302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.337526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.337569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.337792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.337821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.338009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.338036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 
00:34:16.390 [2024-07-14 04:50:36.338194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.338220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.338419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.390 [2024-07-14 04:50:36.338448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.390 qpair failed and we were unable to recover it. 00:34:16.390 [2024-07-14 04:50:36.338779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.338838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.339091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.339128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.339361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.339389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.339594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.339623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.339834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.339862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.340068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.340095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.340292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.340322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.340546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.340575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 
00:34:16.391 [2024-07-14 04:50:36.340777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.340807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.341037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.341064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.341312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.341341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.341545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.341572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.341756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.341782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.342047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.342075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.342290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.342319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.342584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.342611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.342846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.342883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.343104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.343135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 
00:34:16.391 [2024-07-14 04:50:36.343360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.343389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.343625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.343654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.343875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.343914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.344085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.344122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.344355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.344384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.344583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.344612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.344836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.344872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.345089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.345125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.345335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.345365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.345560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.345589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 
00:34:16.391 [2024-07-14 04:50:36.345790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.345819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.346097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.346133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.346364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.346398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.346592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.346621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.346822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.346851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.347065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.347091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.347298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.347327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.347489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.347518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.347757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.347782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.347976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.348003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 
00:34:16.391 [2024-07-14 04:50:36.348218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.348247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.348446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.348475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.348829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.391 [2024-07-14 04:50:36.348887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.391 qpair failed and we were unable to recover it. 00:34:16.391 [2024-07-14 04:50:36.349092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.349118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.349333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.349359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.349628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.349656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.349885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.349915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.350185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.350211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.350405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.350434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.350612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.350641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 
00:34:16.392 [2024-07-14 04:50:36.350837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.350872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.351079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.351106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.351279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.351308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.351480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.351508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.351783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.351835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.352081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.352108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.352291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.352319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.352539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.352568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.352768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.352798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.353016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.353043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 
00:34:16.392 [2024-07-14 04:50:36.353227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.353253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.353420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.353449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.353655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.353681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.353891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.353917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.354121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.354150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.354333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.354362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.354594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.354647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.354850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.354883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.355065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.355093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.355292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.355321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 
00:34:16.392 [2024-07-14 04:50:36.355692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.355755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.355992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.356019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.356200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.356234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.356435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.356464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.356756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.356809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.357043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.357070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.357314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.357340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.357525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.392 [2024-07-14 04:50:36.357551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.392 qpair failed and we were unable to recover it. 00:34:16.392 [2024-07-14 04:50:36.357749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.357775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.358022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.358048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 
00:34:16.393 [2024-07-14 04:50:36.358255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.358285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.358513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.358542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.358747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.358777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.358984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.359011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.359221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.359250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.359473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.359502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.359712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.359741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.359979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.360005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.360210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.360239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.360465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.360491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 
00:34:16.393 [2024-07-14 04:50:36.360691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.360720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.360891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.360918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.361099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.361126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.361305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.361331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.361634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.361660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.361872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.361899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.362106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.362136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.362358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.362387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.362581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.362610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.362795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.362823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 
00:34:16.393 [2024-07-14 04:50:36.363012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.363039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.363215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.363241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.363424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.363450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.363660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.363686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.363896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.363923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.364105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.364132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.364442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.364470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.364651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.364677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.364908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.364938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.365138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.365167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 
00:34:16.393 [2024-07-14 04:50:36.365448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.365476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.365642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.365668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.365894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.365928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.366131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.366158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.366362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.366388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.366606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.366633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.366858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.366899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.367104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.367134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.393 [2024-07-14 04:50:36.367364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.393 [2024-07-14 04:50:36.367390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.393 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.367548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.367575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 
00:34:16.394 [2024-07-14 04:50:36.367750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.367779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.367942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.367972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.368249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.368298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.368506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.368532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.368712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.368738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.368915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.368942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.369183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.369237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.369441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.369467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.369669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.369699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.369879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.369909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 
00:34:16.394 [2024-07-14 04:50:36.370109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.370139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.370342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.370369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.370597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.370626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.370860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.370896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.371133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.371161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.371364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.371391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.371576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.371602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.371789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.371815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.372041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.372068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.372234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.372261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 
00:34:16.394 [2024-07-14 04:50:36.372463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.372491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.372768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.372796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.372993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.373023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.373233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.373259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.373489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.373518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.373750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.373776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.373969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.373995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.374211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.374238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.374514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.374543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.374744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.374775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 
00:34:16.394 [2024-07-14 04:50:36.375008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.375038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.375244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.375269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.375437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.375472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.375677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.375707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.375935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.375962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.376167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.376193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.376391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.376420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.376645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.376674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.376906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.376936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 00:34:16.394 [2024-07-14 04:50:36.377174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.394 [2024-07-14 04:50:36.377199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.394 qpair failed and we were unable to recover it. 
00:34:16.395 [2024-07-14 04:50:36.377416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.377445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.377673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.377702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.377874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.377904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.378130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.378156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.378365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.378394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.378588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.378617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.378794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.378823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.379060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.379087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.379361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.379390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.379582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.379612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 
00:34:16.395 [2024-07-14 04:50:36.379800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.379828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.380015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.380041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.380212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.380241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.380440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.380470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.380882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.380943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.381173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.381199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.381411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.381440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.381613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.381643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.381877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.381906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.382114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.382141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 
00:34:16.395 [2024-07-14 04:50:36.382341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.382370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.382537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.382566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.382767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.382796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.382996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.383023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.383225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.383254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.383456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.383486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.383730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.383759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.383964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.383991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.384217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.384246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.384456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.384483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 
00:34:16.395 [2024-07-14 04:50:36.384682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.384710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.384941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.384968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.385130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.385161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.385375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.385404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.385755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.385809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.386008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.386035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.386218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.386244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.386453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.386482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.386698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.386727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 00:34:16.395 [2024-07-14 04:50:36.386928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.386954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.395 qpair failed and we were unable to recover it. 
00:34:16.395 [2024-07-14 04:50:36.387190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.395 [2024-07-14 04:50:36.387219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.387445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.387474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.387746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.387775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.388016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.388043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.388253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.388282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.388508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.388537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.388773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.388802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.389000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.389027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.389258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.389286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.389522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.389551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 
00:34:16.396 [2024-07-14 04:50:36.389752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.389778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.389978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.390005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.390207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.390236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.390464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.390493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.390724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.390750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.390931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.390958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.391118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.391144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.391320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.391346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.391503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.391529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.391707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.391734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 
00:34:16.396 [2024-07-14 04:50:36.391916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.391943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.392185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.392214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.392572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.392634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.392854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.392895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.393108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.393134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.393368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.393397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.393794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.393861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.394070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.394096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.394259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.394285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.394514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.394542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 
00:34:16.396 [2024-07-14 04:50:36.394705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.394735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.394936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.394963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.395199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.395232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.395436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.395465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.395672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.395701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.396 [2024-07-14 04:50:36.395972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.396 [2024-07-14 04:50:36.395998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.396 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.396207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.396236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.396401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.396430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.396766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.396810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.397039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.397066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 
00:34:16.397 [2024-07-14 04:50:36.397270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.397299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.397500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.397529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.397755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.397784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.398021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.398047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.398246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.398275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.398495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.398524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.398726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.398755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.398952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.398979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.399190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.399219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.399453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.399482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 
00:34:16.397 [2024-07-14 04:50:36.399678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.399707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.399894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.399924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.400104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.400133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.400330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.400359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.400579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.400631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.400880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.400907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.401181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.401210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.401415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.401442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.401656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.401706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.401918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.401945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 
00:34:16.397 [2024-07-14 04:50:36.402138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.402165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.402345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.402372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.402693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.402752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.402953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.402980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.403211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.403240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.403436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.403465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.403825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.403895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.404100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.404126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.404354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.404383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 00:34:16.397 [2024-07-14 04:50:36.404604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.397 [2024-07-14 04:50:36.404630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.397 qpair failed and we were unable to recover it. 
00:34:16.397 [2024-07-14 04:50:36.404833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:16.397 [2024-07-14 04:50:36.404862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 
00:34:16.397 qpair failed and we were unable to recover it. 
00:34:16.397 [... the same three-line failure (posix_sock_create connect() errno = 111, the nvme_tcp_qpair_connect_sock error for tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats back-to-back roughly 200 more times between 04:50:36.405085 and 04:50:36.455736; only the timestamps change ...]
00:34:16.403 [2024-07-14 04:50:36.455969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:16.403 [2024-07-14 04:50:36.455996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 
00:34:16.403 qpair failed and we were unable to recover it. 
00:34:16.403 [2024-07-14 04:50:36.456224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.456253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.456473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.456498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.456709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.456751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.456931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.456958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.457156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.457185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.457400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.457426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.457615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.457641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.457844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.457877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.458091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.458120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.458317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.458346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 
00:34:16.403 [2024-07-14 04:50:36.458558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.458584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.458768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.458794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.459016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.459045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.459259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.459288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.459659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.459714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.459942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.459968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.460173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.460202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.460402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.460430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.460626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.460655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.460895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.460926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 
00:34:16.403 [2024-07-14 04:50:36.461140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.461167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.461407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.461436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.461671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.461697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.461898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.461925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.462163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.462192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.462433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.462459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.462688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.462716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.462956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.462983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.463166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.463196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.463431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.463460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 
00:34:16.403 [2024-07-14 04:50:36.463715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.463768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.463996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.464023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.464265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.464294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.464495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.464524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.464714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.464742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.403 qpair failed and we were unable to recover it. 00:34:16.403 [2024-07-14 04:50:36.464945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.403 [2024-07-14 04:50:36.464972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.465190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.465218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.465409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.465438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.465747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.465800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.466030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.466056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 
00:34:16.404 [2024-07-14 04:50:36.466264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.466293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.466512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.466541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.466736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.466764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.466974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.467001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.467178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.467206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.467405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.467430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.467629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.467655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.467844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.467881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.468094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.468120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.468309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.468339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 
00:34:16.404 [2024-07-14 04:50:36.468704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.468755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.468971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.468999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.469236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.469262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.469448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.469474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.469779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.469846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.470063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.470090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.470271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.470298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.470511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.470540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.470736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.470765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.470988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.471019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 
00:34:16.404 [2024-07-14 04:50:36.471226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.471255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.471481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.471509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.471711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.471740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.471944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.471971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.472143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.472171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.472372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.472401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.472597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.472625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.472896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.472922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.473166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.473192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 00:34:16.404 [2024-07-14 04:50:36.473366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.404 [2024-07-14 04:50:36.473392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.404 qpair failed and we were unable to recover it. 
00:34:16.404 [2024-07-14 04:50:36.473742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.473801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.474032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.474059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.474268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.474297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.474499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.474525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.474731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.474757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.474997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.475023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.475197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.475226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.475455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.475481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.475685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.475711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.475893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.475921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 
00:34:16.405 [2024-07-14 04:50:36.476104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.476130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.476334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.476363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.476684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.476737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.476947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.476973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.477176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.477205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.477431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.477460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.477633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.477662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.477836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.477862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.478097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.478126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.478327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.478357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 
00:34:16.405 [2024-07-14 04:50:36.478532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.478561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.478765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.478791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.478969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.478996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.479152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.479179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.479355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.479381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.479564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.479590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.479768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.479794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.479997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.480027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.480263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.480289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.480472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.480502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 
00:34:16.405 [2024-07-14 04:50:36.480709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.480737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.480914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.480944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.481175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.481201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.481388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.481414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.481570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.481596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.481824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.481853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.482090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.482116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.482297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.482324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.482527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.482556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.482752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.482780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 
00:34:16.405 [2024-07-14 04:50:36.482986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.405 [2024-07-14 04:50:36.483016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.405 qpair failed and we were unable to recover it. 00:34:16.405 [2024-07-14 04:50:36.483198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.483225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.483455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.483483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.483680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.483709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.483916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.483945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.484184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.484209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.484407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.484435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.484668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.484697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.484919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.484948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.485217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.485243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 
00:34:16.406 [2024-07-14 04:50:36.485453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.485487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.485687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.485716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.485958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.485987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.486217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.486242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.486495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.486524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.486715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.486744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.486947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.486978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.487206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.487232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.487452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.487480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.487678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.487706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 
00:34:16.406 [2024-07-14 04:50:36.487909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.487940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.488146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.488173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.488402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.488430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.488660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.488688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.488862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.488897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.489083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.489109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.489267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.489294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.489520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.489549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.489746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.489775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 00:34:16.406 [2024-07-14 04:50:36.489989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.406 [2024-07-14 04:50:36.490020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.406 qpair failed and we were unable to recover it. 
00:34:16.411 [2024-07-14 04:50:36.539485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.411 [2024-07-14 04:50:36.539514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.411 qpair failed and we were unable to recover it. 00:34:16.411 [2024-07-14 04:50:36.539736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.411 [2024-07-14 04:50:36.539765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.411 qpair failed and we were unable to recover it. 00:34:16.411 [2024-07-14 04:50:36.539989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.411 [2024-07-14 04:50:36.540019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.411 qpair failed and we were unable to recover it. 00:34:16.411 [2024-07-14 04:50:36.540191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.411 [2024-07-14 04:50:36.540217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.411 qpair failed and we were unable to recover it. 00:34:16.411 [2024-07-14 04:50:36.540394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.411 [2024-07-14 04:50:36.540421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.411 qpair failed and we were unable to recover it. 00:34:16.411 [2024-07-14 04:50:36.540578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.411 [2024-07-14 04:50:36.540622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.411 qpair failed and we were unable to recover it. 00:34:16.411 [2024-07-14 04:50:36.540820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.411 [2024-07-14 04:50:36.540849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.411 qpair failed and we were unable to recover it. 00:34:16.411 [2024-07-14 04:50:36.541049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.411 [2024-07-14 04:50:36.541075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.411 qpair failed and we were unable to recover it. 00:34:16.411 [2024-07-14 04:50:36.541254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.411 [2024-07-14 04:50:36.541281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.411 qpair failed and we were unable to recover it. 00:34:16.411 [2024-07-14 04:50:36.541481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.411 [2024-07-14 04:50:36.541511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.411 qpair failed and we were unable to recover it. 
00:34:16.411 [2024-07-14 04:50:36.541721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.411 [2024-07-14 04:50:36.541748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.411 qpair failed and we were unable to recover it. 00:34:16.412 [2024-07-14 04:50:36.541932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.541959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 00:34:16.412 [2024-07-14 04:50:36.542165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.542193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 00:34:16.412 [2024-07-14 04:50:36.542401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.542429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 00:34:16.412 [2024-07-14 04:50:36.542622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.542652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 00:34:16.412 [2024-07-14 04:50:36.542854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.542887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 00:34:16.412 [2024-07-14 04:50:36.543052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.543078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 00:34:16.412 [2024-07-14 04:50:36.543308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.543337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 00:34:16.412 [2024-07-14 04:50:36.543704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.543759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 00:34:16.412 [2024-07-14 04:50:36.543957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.543984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 
00:34:16.412 [2024-07-14 04:50:36.544188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.544219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 00:34:16.412 [2024-07-14 04:50:36.544433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.544459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 00:34:16.412 [2024-07-14 04:50:36.544688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.544717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 00:34:16.412 [2024-07-14 04:50:36.544946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.544973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 00:34:16.412 [2024-07-14 04:50:36.545158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.545187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 00:34:16.412 [2024-07-14 04:50:36.545390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.545418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 00:34:16.412 [2024-07-14 04:50:36.545685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.545715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 00:34:16.412 [2024-07-14 04:50:36.545927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.545954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 00:34:16.412 [2024-07-14 04:50:36.546147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.546176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 00:34:16.412 [2024-07-14 04:50:36.546403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.546432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 
00:34:16.412 [2024-07-14 04:50:36.546710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.546761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 00:34:16.412 [2024-07-14 04:50:36.546973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.546999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 00:34:16.412 [2024-07-14 04:50:36.547205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.412 [2024-07-14 04:50:36.547233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.412 qpair failed and we were unable to recover it. 00:34:16.692 [2024-07-14 04:50:36.547409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.692 [2024-07-14 04:50:36.547438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.692 qpair failed and we were unable to recover it. 00:34:16.692 [2024-07-14 04:50:36.547742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.692 [2024-07-14 04:50:36.547793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.692 qpair failed and we were unable to recover it. 00:34:16.692 [2024-07-14 04:50:36.548029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.692 [2024-07-14 04:50:36.548055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.692 qpair failed and we were unable to recover it. 00:34:16.692 [2024-07-14 04:50:36.548234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.692 [2024-07-14 04:50:36.548263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.692 qpair failed and we were unable to recover it. 00:34:16.692 [2024-07-14 04:50:36.548464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.692 [2024-07-14 04:50:36.548493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.692 qpair failed and we were unable to recover it. 00:34:16.692 [2024-07-14 04:50:36.548725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.692 [2024-07-14 04:50:36.548774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.692 qpair failed and we were unable to recover it. 00:34:16.692 [2024-07-14 04:50:36.548965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.692 [2024-07-14 04:50:36.548993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.692 qpair failed and we were unable to recover it. 
00:34:16.692 [2024-07-14 04:50:36.549175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.692 [2024-07-14 04:50:36.549205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.692 qpair failed and we were unable to recover it. 00:34:16.692 [2024-07-14 04:50:36.549383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.692 [2024-07-14 04:50:36.549412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.692 qpair failed and we were unable to recover it. 00:34:16.692 [2024-07-14 04:50:36.549671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.692 [2024-07-14 04:50:36.549718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.692 qpair failed and we were unable to recover it. 00:34:16.692 [2024-07-14 04:50:36.549920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.692 [2024-07-14 04:50:36.549947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.692 qpair failed and we were unable to recover it. 00:34:16.692 [2024-07-14 04:50:36.550159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.692 [2024-07-14 04:50:36.550187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.692 qpair failed and we were unable to recover it. 00:34:16.692 [2024-07-14 04:50:36.550365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.692 [2024-07-14 04:50:36.550392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.692 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.550626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.550655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.550826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.550852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.551045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.551072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.551274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.551303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 
00:34:16.693 [2024-07-14 04:50:36.551495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.551524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.551725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.551751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.551956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.551986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.552165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.552195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.552464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.552515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.552719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.552745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.552950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.552979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.553151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.553180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.553445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.553496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.553689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.553715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 
00:34:16.693 [2024-07-14 04:50:36.553890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.553920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.554141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.554170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.554397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.554426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.554607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.554633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.554842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.554879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.555089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.555115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.555348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.555396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.555603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.555629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.555815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.555841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.556003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.556030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 
00:34:16.693 [2024-07-14 04:50:36.556224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.556253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.556423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.556449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.556641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.556669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.556899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.556943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.557103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.557129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.557335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.557361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.557533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.557562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.557788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.557816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.557985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.558015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.558214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.558240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 
00:34:16.693 [2024-07-14 04:50:36.558444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.558473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.558702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.558731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.558967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.558993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.559183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.559210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.559414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.559443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.559614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.559643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.693 qpair failed and we were unable to recover it. 00:34:16.693 [2024-07-14 04:50:36.559849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.693 [2024-07-14 04:50:36.559887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.560065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.560091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.560293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.560321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.560490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.560519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 
00:34:16.694 [2024-07-14 04:50:36.560743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.560772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.561009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.561036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.561225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.561254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.561465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.561494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.561695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.561723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.561905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.561932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.562094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.562120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.562345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.562374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.562569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.562616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.562794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.562820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 
00:34:16.694 [2024-07-14 04:50:36.563026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.563055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.563258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.563287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.563508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.563534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.563715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.563741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.563952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.563995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.564154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.564183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.564430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.564480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.564660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.564686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.564861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.564906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.565109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.565137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 
00:34:16.694 [2024-07-14 04:50:36.565362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.565392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.565567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.565593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.565756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.565784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.565946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.565976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.566177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.566203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.566349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.566375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.566605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.566633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.566828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.566857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.567057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.567085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.567277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.567303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 
00:34:16.694 [2024-07-14 04:50:36.567475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.567503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.567702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.567731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.567899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.567929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.568109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.568135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.568324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.568350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.568557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.568586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.694 [2024-07-14 04:50:36.568785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.694 [2024-07-14 04:50:36.568814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.694 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.568998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.569025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.569175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.569201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.569384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.569410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 
00:34:16.695 [2024-07-14 04:50:36.569593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.569623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.569853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.569892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.570076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.570105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.570306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.570335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.570511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.570540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.570720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.570748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.570903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.570930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.571138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.571166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.571432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.571479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.571686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.571713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 
00:34:16.695 [2024-07-14 04:50:36.571961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.571991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.572172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.572200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.572365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.572391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.572600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.572626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.572877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.572904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.573055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.573081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.573313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.573363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.573541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.573568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.573778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.573808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.574017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.574044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 
00:34:16.695 [2024-07-14 04:50:36.574200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.574227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.574408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.574434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.574644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.574686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.574852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.574891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.575103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.575129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.575292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.575318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.575524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.575554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.575729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.575759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.575980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.576012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.576200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.576227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 
00:34:16.695 [2024-07-14 04:50:36.576440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.576471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.576675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.576702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.576892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.576919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.577122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.577148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.577375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.577404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.577598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.577628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.577857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.695 [2024-07-14 04:50:36.577894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.695 qpair failed and we were unable to recover it. 00:34:16.695 [2024-07-14 04:50:36.578084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.578110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.578291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.578317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.578520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.578549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 
00:34:16.696 [2024-07-14 04:50:36.578755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.578782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.578978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.579005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.579218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.579247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.579443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.579472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.579704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.579730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.579879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.579906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.580091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.580117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.580276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.580302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.580547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.580573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.580722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.580748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 
00:34:16.696 [2024-07-14 04:50:36.580916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.580944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.581166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.581196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.581400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.581426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.581603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.581629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.581831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.581859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.582067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.582096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.582356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.582407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.582639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.582665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.582882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.582913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.583115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.583144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 
00:34:16.696 [2024-07-14 04:50:36.583364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.583409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.583639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.583666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.583880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.583910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.584085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.584114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.584337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.584366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.584601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.584627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.584824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.584853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.585101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.585130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.585353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.585399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.585575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.585601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 
00:34:16.696 [2024-07-14 04:50:36.585761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.585788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.696 [2024-07-14 04:50:36.585947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.696 [2024-07-14 04:50:36.585975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.696 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.586231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.586277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.586463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.586489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.586669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.586695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.586941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.586971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.587161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.587193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.587435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.587461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.587611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.587637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.587827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.587853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 
00:34:16.697 [2024-07-14 04:50:36.588054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.588085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.588267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.588293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.588494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.588523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.588734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.588762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.588934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.588964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.589144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.589170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.589373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.589402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.589602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.589631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.589850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.589889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.590065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.590096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 
00:34:16.697 [2024-07-14 04:50:36.590300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.590329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.590497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.590522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.590719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.590748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.590975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.591006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.591211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.591240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.591469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.591515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.591724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.591755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.591915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.591942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.697 [2024-07-14 04:50:36.592098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.697 [2024-07-14 04:50:36.592125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.697 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.592308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.592335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 
00:34:16.698 [2024-07-14 04:50:36.592544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.592571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.592780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.592809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.593002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.593029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.593185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.593211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.593401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.593427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.593604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.593631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.593816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.593842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.594015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.594042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.594225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.594252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.594502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.594531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 
00:34:16.698 [2024-07-14 04:50:36.594714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.594744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.594954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.594981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.595137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.595163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.595388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.595437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.595720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.595768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.595977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.596004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.596186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.596214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.596553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.596582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.596785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.596810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.596972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.596998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 
00:34:16.698 [2024-07-14 04:50:36.597188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.597217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.597646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.597698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.597903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.597930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.598121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.598150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.598408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.598437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.598663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.598689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.598878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.598904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.599094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.599123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.599370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.599399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.599742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.599794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 
00:34:16.698 [2024-07-14 04:50:36.600001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.600030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.600294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.600322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.600555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.600601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.600805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.600831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.601027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.601058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.601254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.601282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.601473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.601506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.601701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.601727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.601931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.698 [2024-07-14 04:50:36.601961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.698 qpair failed and we were unable to recover it. 00:34:16.698 [2024-07-14 04:50:36.602199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.602228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 
00:34:16.699 [2024-07-14 04:50:36.602424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.602453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.602676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.602702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.602923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.602952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.603179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.603208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.603433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.603461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.603688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.603713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.603914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.603944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.604175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.604203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.604442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.604470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.604699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.604724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 
00:34:16.699 [2024-07-14 04:50:36.604887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.604931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.605127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.605155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.605353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.605381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.605576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.605602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.605812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.605838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.606027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.606055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.606279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.606307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.606477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.606503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.606686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.606712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.606935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.606965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 
00:34:16.699 [2024-07-14 04:50:36.607165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.607194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.607395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.607424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.607589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.607615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.607823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.607849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.608053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.608082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.608312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.608357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.608596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.608621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.608834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.608860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.609062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.609091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 00:34:16.699 [2024-07-14 04:50:36.609310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.699 [2024-07-14 04:50:36.609357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.699 qpair failed and we were unable to recover it. 
00:34:16.699 [2024-07-14 04:50:36.609785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.699 [2024-07-14 04:50:36.609839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:16.699 qpair failed and we were unable to recover it.
[... the same pair of errors (posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it.") repeats continuously from 04:50:36.609 through 04:50:36.655 ...]
00:34:16.705 [2024-07-14 04:50:36.655913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.705 [2024-07-14 04:50:36.655938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:16.705 qpair failed and we were unable to recover it.
00:34:16.705 [2024-07-14 04:50:36.656119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.656145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.656353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.656379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.656556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.656581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.656735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.656761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.656931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.656957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.657167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.657193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.657376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.657401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.657608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.657634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.657813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.657838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.658055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.658081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 
00:34:16.705 [2024-07-14 04:50:36.658267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.658291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.658497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.658522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.658699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.658724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.658885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.658913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.659095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.659119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.659300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.659325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.659505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.659531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.659708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.659733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.659926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.659952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.660129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.660154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 
00:34:16.705 [2024-07-14 04:50:36.660358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.660383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.660563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.660587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.660769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.660795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.660950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.660980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.661182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.661208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.661388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.661415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.661575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.661600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.661790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.661816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.662025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.662051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.662235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.662259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 
00:34:16.705 [2024-07-14 04:50:36.662445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.705 [2024-07-14 04:50:36.662471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.705 qpair failed and we were unable to recover it. 00:34:16.705 [2024-07-14 04:50:36.662649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.662675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.662826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.662852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.663044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.663069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.663231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.663256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.663459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.663484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.663693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.663718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.663882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.663909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.664126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.664152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.664329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.664354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 
00:34:16.706 [2024-07-14 04:50:36.664558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.664583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.664767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.664791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.664982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.665008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.665227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.665252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.665458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.665484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.665691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.665716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.665899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.665926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.666109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.666134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.666289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.666314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.666522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.666548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 
00:34:16.706 [2024-07-14 04:50:36.666762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.666788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.666936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.666962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.667146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.667172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.667355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.667380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.667584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.667608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.667799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.667825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.668039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.668066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.668274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.668300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.668489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.668514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.668666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.668692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 
00:34:16.706 [2024-07-14 04:50:36.668882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.668909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.669094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.669119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.669323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.669349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.669509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.669539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.669719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.669745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.669938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.669965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.670186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.670212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.670394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.670419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.670596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.670622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.670801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.670827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 
00:34:16.706 [2024-07-14 04:50:36.671048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.671075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.706 [2024-07-14 04:50:36.671225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.706 [2024-07-14 04:50:36.671249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.706 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.671459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.671485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.671637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.671664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.671844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.671875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.672083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.672108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.672291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.672316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.672532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.672558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.672742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.672766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.672975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.673002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 
00:34:16.707 [2024-07-14 04:50:36.673206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.673232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.673392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.673417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.673595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.673620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.673797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.673821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.674003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.674029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.674177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.674203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.674387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.674411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.674621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.674645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.674860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.674890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.675039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.675065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 
00:34:16.707 [2024-07-14 04:50:36.675279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.675304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.675488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.675514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.675722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.675747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.675903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.675929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.676106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.676132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.676313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.676338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.676518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.676544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.676708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.676734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.676922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.676948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.677101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.677126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 
00:34:16.707 [2024-07-14 04:50:36.677334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.677359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.677538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.677563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.677722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.677747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.677931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.677961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.678138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.678163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.678342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.678368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.678576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.678602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.678781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.678807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.678983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.679008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.679191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.679217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 
00:34:16.707 [2024-07-14 04:50:36.679368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.679393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.679571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.679597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.707 [2024-07-14 04:50:36.679777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.707 [2024-07-14 04:50:36.679802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.707 qpair failed and we were unable to recover it. 00:34:16.708 [2024-07-14 04:50:36.679976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.708 [2024-07-14 04:50:36.680001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.708 qpair failed and we were unable to recover it. 00:34:16.708 [2024-07-14 04:50:36.680153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.708 [2024-07-14 04:50:36.680178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.708 qpair failed and we were unable to recover it. 00:34:16.708 [2024-07-14 04:50:36.680389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.708 [2024-07-14 04:50:36.680414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.708 qpair failed and we were unable to recover it. 00:34:16.708 [2024-07-14 04:50:36.680592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.708 [2024-07-14 04:50:36.680616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.708 qpair failed and we were unable to recover it. 00:34:16.708 [2024-07-14 04:50:36.680767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.708 [2024-07-14 04:50:36.680791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.708 qpair failed and we were unable to recover it. 00:34:16.708 [2024-07-14 04:50:36.680977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.708 [2024-07-14 04:50:36.681004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.708 qpair failed and we were unable to recover it. 00:34:16.708 [2024-07-14 04:50:36.681188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.708 [2024-07-14 04:50:36.681214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.708 qpair failed and we were unable to recover it. 
00:34:16.708 [2024-07-14 04:50:36.681418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.708 [2024-07-14 04:50:36.681443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.708 qpair failed and we were unable to recover it. 00:34:16.708 [2024-07-14 04:50:36.681627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.708 [2024-07-14 04:50:36.681652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.708 qpair failed and we were unable to recover it. 00:34:16.708 [2024-07-14 04:50:36.681833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.708 [2024-07-14 04:50:36.681857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.708 qpair failed and we were unable to recover it. 00:34:16.708 [2024-07-14 04:50:36.682041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.708 [2024-07-14 04:50:36.682066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.708 qpair failed and we were unable to recover it. 00:34:16.708 [2024-07-14 04:50:36.682227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.708 [2024-07-14 04:50:36.682252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.708 qpair failed and we were unable to recover it. 00:34:16.708 [2024-07-14 04:50:36.682428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.708 [2024-07-14 04:50:36.682453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.708 qpair failed and we were unable to recover it. 00:34:16.708 [2024-07-14 04:50:36.682655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.708 [2024-07-14 04:50:36.682679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.708 qpair failed and we were unable to recover it. 00:34:16.708 [2024-07-14 04:50:36.682887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.708 [2024-07-14 04:50:36.682912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.708 qpair failed and we were unable to recover it. 00:34:16.708 [2024-07-14 04:50:36.683092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.708 [2024-07-14 04:50:36.683117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.708 qpair failed and we were unable to recover it. 00:34:16.708 [2024-07-14 04:50:36.683303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.708 [2024-07-14 04:50:36.683329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.708 qpair failed and we were unable to recover it. 
00:34:16.708 [2024-07-14 04:50:36.683517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.708 [2024-07-14 04:50:36.683543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:16.708 qpair failed and we were unable to recover it.
00:34:16.708 [2024-07-14 04:50:36.683745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.708 [2024-07-14 04:50:36.683771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:16.708 qpair failed and we were unable to recover it.
[... the same three-line error record (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every subsequent reconnect attempt from 04:50:36.683951 through 04:50:36.726675 ...]
00:34:16.713 [2024-07-14 04:50:36.726829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.713 [2024-07-14 04:50:36.726854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:16.713 qpair failed and we were unable to recover it.
00:34:16.713 [2024-07-14 04:50:36.727046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.713 [2024-07-14 04:50:36.727071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.713 qpair failed and we were unable to recover it. 00:34:16.713 [2024-07-14 04:50:36.727245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.713 [2024-07-14 04:50:36.727270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.713 qpair failed and we were unable to recover it. 00:34:16.713 [2024-07-14 04:50:36.727428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.713 [2024-07-14 04:50:36.727453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.713 qpair failed and we were unable to recover it. 00:34:16.713 [2024-07-14 04:50:36.727658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.713 [2024-07-14 04:50:36.727687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.713 qpair failed and we were unable to recover it. 00:34:16.713 [2024-07-14 04:50:36.727846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.713 [2024-07-14 04:50:36.727879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.713 qpair failed and we were unable to recover it. 00:34:16.713 [2024-07-14 04:50:36.728033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.713 [2024-07-14 04:50:36.728059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.713 qpair failed and we were unable to recover it. 00:34:16.713 [2024-07-14 04:50:36.728212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.713 [2024-07-14 04:50:36.728237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.713 qpair failed and we were unable to recover it. 00:34:16.713 [2024-07-14 04:50:36.728422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.713 [2024-07-14 04:50:36.728447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.713 qpair failed and we were unable to recover it. 00:34:16.713 [2024-07-14 04:50:36.728651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.713 [2024-07-14 04:50:36.728676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.713 qpair failed and we were unable to recover it. 00:34:16.713 [2024-07-14 04:50:36.728832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.713 [2024-07-14 04:50:36.728857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.713 qpair failed and we were unable to recover it. 
00:34:16.713 [2024-07-14 04:50:36.729068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.713 [2024-07-14 04:50:36.729095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.713 qpair failed and we were unable to recover it. 00:34:16.713 [2024-07-14 04:50:36.729295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.713 [2024-07-14 04:50:36.729321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.713 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.729476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.729501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.729680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.729705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.729889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.729914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.730069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.730094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.730293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.730318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.730474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.730499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.730684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.730709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.730859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.730891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 
00:34:16.714 [2024-07-14 04:50:36.731068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.731094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.731271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.731296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.731477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.731503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.731654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.731678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.731856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.731887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.732073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.732098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.732256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.732282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.732457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.732482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.732656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.732681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.732835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.732860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 
00:34:16.714 [2024-07-14 04:50:36.733051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.733077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.733259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.733284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.733462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.733488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.733642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.733666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.733847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.733880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.734097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.734123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.734300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.734327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.734510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.734535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.734740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.734765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.734918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.734944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 
00:34:16.714 [2024-07-14 04:50:36.735122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.735147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.735294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.735319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.735500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.735524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.735700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.735729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.735907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.735933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.736138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.736164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.714 [2024-07-14 04:50:36.736348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.714 [2024-07-14 04:50:36.736373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.714 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.736549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.736574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.736750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.736775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.736929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.736955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 
00:34:16.715 [2024-07-14 04:50:36.737116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.737143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.737351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.737376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.737580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.737606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.737780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.737805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.738023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.738049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.738206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.738231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.738383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.738409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.738570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.738595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.738791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.738816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.738993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.739020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 
00:34:16.715 [2024-07-14 04:50:36.739167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.739193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.739379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.739404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.739584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.739609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.739797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.739821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.739974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.739999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.740208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.740233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.740419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.740444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.740599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.740625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.740832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.740857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.741070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.741094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 
00:34:16.715 [2024-07-14 04:50:36.741304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.741329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.741472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.741498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.741655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.741679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.741888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.741913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.742097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.742122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.742301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.742327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.742502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.742528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.742676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.742701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.742882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.742907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.743084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.743109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 
00:34:16.715 [2024-07-14 04:50:36.743261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.743286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.743469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.743495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.743679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.743705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.743880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.743910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.744087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.744113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.744259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.744284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.744460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.715 [2024-07-14 04:50:36.744486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.715 qpair failed and we were unable to recover it. 00:34:16.715 [2024-07-14 04:50:36.744693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.744719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.744904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.744931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.745088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.745113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 
00:34:16.716 [2024-07-14 04:50:36.745273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.745297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.745454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.745478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.745658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.745683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.745837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.745862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.746019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.746044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.746216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.746242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.746425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.746451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.746638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.746664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.746849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.746882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.747085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.747110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 
00:34:16.716 [2024-07-14 04:50:36.747269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.747294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.747453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.747478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.747655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.747680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.747864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.747896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.748054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.748080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.748263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.748288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.748461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.748487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.748687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.748713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.748862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.748895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.749079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.749103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 
00:34:16.716 [2024-07-14 04:50:36.749293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.749318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.749478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.749505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.749719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.749744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.749898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.749924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.750111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.750135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.750340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.750365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.750540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.750565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.750771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.750797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.750976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.751003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.751210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.751236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 
00:34:16.716 [2024-07-14 04:50:36.751422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.751447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.751633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.751657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.751812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.751836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.751992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.752023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.752230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.752255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.752461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.752486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.752646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.752671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.752821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.716 [2024-07-14 04:50:36.752846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.716 qpair failed and we were unable to recover it. 00:34:16.716 [2024-07-14 04:50:36.753030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.753056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.753250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.753276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 
00:34:16.717 [2024-07-14 04:50:36.753426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.753450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.753608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.753633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.753806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.753832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.754043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.754070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.754223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.754248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.754399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.754424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.754628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.754654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.754813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.754838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.755025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.755051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.755241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.755265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 
00:34:16.717 [2024-07-14 04:50:36.755447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.755472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.755657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.755683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.755892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.755918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.756096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.756122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.756303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.756328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.756509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.756533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.756717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.756743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.756924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.756951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.757107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.757132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.757307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.757331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 
00:34:16.717 [2024-07-14 04:50:36.757541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.757582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.757799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.757828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.758029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.758057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.758298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.758341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.758578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.758621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.758810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.758836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.759027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.759053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.759285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.759328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.759505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.759548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.759728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.759753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 
00:34:16.717 [2024-07-14 04:50:36.759934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.759962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.760141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.760184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.760367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.760411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.760567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.760593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.760780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.760806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.761012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.761056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.761267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.761310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.761528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.761570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.761754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.717 [2024-07-14 04:50:36.761782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.717 qpair failed and we were unable to recover it. 00:34:16.717 [2024-07-14 04:50:36.761962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.762005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 
00:34:16.718 [2024-07-14 04:50:36.762217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.762260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.762465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.762508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.762692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.762718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.762917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.762946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.763174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.763217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.763446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.763489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.763672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.763698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.763916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.763942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.764121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.764163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.764406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.764449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 
00:34:16.718 [2024-07-14 04:50:36.764672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.764700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.764918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.764948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.765170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.765212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.765450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.765493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.765732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.765775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.765983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.766027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.766240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.766267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.766477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.766521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.766706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.766732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.766960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.767004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 
00:34:16.718 [2024-07-14 04:50:36.767208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.767255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.767456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.767498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.767677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.767703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.767885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.767911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.768126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.768152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.768361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.768404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.768644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.768687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.768896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.768922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.769124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.769167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.769401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.769444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 
00:34:16.718 [2024-07-14 04:50:36.769650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.769692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.769879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.769906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.770088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.770114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.770352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.718 [2024-07-14 04:50:36.770394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.718 qpair failed and we were unable to recover it. 00:34:16.718 [2024-07-14 04:50:36.770577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.770623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.770808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.770834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.771002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.771029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.771231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.771274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.771478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.771521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.771723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.771748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 
00:34:16.719 [2024-07-14 04:50:36.771924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.771955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.772157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.772200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.772434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.772477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.772659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.772684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.772835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.772862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.773102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.773146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.773383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.773425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.773634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.773678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.773827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.773854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.774077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.774120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 
00:34:16.719 [2024-07-14 04:50:36.774349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.774392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.774573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.774599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.774757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.774782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.774978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.775021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.775230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.775259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.775476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.775521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.775727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.775753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.775959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.776003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.776219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.776246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.776478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.776521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 
00:34:16.719 [2024-07-14 04:50:36.776706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.776735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.776948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.776975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.777157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.777183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.777369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.777395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.777595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.777637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.777819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.777845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a0000b90 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.778080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.778126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.778361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.778390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.778616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.778645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.778850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.778888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 
00:34:16.719 [2024-07-14 04:50:36.779093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.779118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.779316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.779344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.779539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.779567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.779758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.779786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.719 [2024-07-14 04:50:36.780004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.719 [2024-07-14 04:50:36.780031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.719 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.780258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.780287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.780462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.780490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.780716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.780744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.780946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.780972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.781174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.781202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 
00:34:16.720 [2024-07-14 04:50:36.781367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.781395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.781561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.781589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.781851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.781892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.782120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.782162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.782363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.782391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.782618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.782645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.782838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.782872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.783077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.783107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.783303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.783328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.783530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.783558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 
00:34:16.720 [2024-07-14 04:50:36.783755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.783783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.784011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.784037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.784243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.784271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.784466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.784494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.784715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.784743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.784933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.784959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.785146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.785171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.785319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.785345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.785576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.785604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.785831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.785859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 
00:34:16.720 [2024-07-14 04:50:36.786044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.786072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.786305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.786333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.786529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.786554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.786760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.786788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.786994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.787020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.787256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.787285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.787486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.787511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.787690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.787718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.787944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.787973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.788204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.788232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 
00:34:16.720 [2024-07-14 04:50:36.788430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.788455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.788684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.788712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.788920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.788949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.789150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.789178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.789371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.789400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.720 [2024-07-14 04:50:36.789600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.720 [2024-07-14 04:50:36.789629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.720 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.789806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.789834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.790076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.790106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.790336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.790362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.790567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.790595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 
00:34:16.721 [2024-07-14 04:50:36.790785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.790813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.791020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.791049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.791221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.791246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.791442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.791469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.791632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.791660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.791836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.791871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.792081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.792106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.792309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.792337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.792511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.792539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.792737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.792765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 
00:34:16.721 [2024-07-14 04:50:36.792938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.792965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.793111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.793136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.793369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.793397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.793588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.793616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.793820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.793846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.794062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.794092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.794320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.794348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.794552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.794581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.794812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.794838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.795033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.795062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 
00:34:16.721 [2024-07-14 04:50:36.795231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.795259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.795486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.795518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.795731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.795756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.795962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.795991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.796170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.796198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.796394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.796422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.796645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.796670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.796841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.796876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.797043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.797072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.797270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.797298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 
00:34:16.721 [2024-07-14 04:50:36.797475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.797500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.797696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.797724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.797900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.797929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.798123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.798151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.798326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.798352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.798585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.798614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.721 qpair failed and we were unable to recover it. 00:34:16.721 [2024-07-14 04:50:36.798842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.721 [2024-07-14 04:50:36.798881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.799058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.799086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.799294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.799319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.799545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.799574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 
00:34:16.722 [2024-07-14 04:50:36.799773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.799801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.800000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.800026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.800205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.800230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.800437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.800465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.800661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.800686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.800918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.800947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.801128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.801153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.801333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.801358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.801539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.801568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.801791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.801817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 
00:34:16.722 [2024-07-14 04:50:36.801999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.802026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.802187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.802212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.802368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.802393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.802638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.802666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.802838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.802863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.803045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.803073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.803304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.803331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.803530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.803558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.803737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.803764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.803990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.804018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 
00:34:16.722 [2024-07-14 04:50:36.804266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.804294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.804463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.804491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.804685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.804714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.804921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.804951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.805177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.805205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.805414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.805442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.805647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.805672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.805850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.805886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.806048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.806074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.806288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.806328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 
00:34:16.722 [2024-07-14 04:50:36.806546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.806572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.806803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.806833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.807037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.807063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.807275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.807320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.807550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.807574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.722 qpair failed and we were unable to recover it. 00:34:16.722 [2024-07-14 04:50:36.807788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.722 [2024-07-14 04:50:36.807817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.808007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.808033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.808206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.808236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.808439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.808465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.808696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.808724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 
00:34:16.723 [2024-07-14 04:50:36.808916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.808945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.809152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.809177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.809379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.809404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.809585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.809610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.809814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.809842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.810061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.810087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.810295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.810320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.810523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.810552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.810757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.810782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.810985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.811019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 
00:34:16.723 [2024-07-14 04:50:36.811226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.811251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.811450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.811478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.811678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.811706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.811909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.811938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.812143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.812168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.812324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.812349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.812505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.812530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.812683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.812708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.812891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.812917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.813117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.813145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 
00:34:16.723 [2024-07-14 04:50:36.813369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.813397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.813602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.813631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.813835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.813861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.814082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.814111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.814283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.814312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.814480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.814508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.814716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.814741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.814900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.814925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.815076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.815102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.815254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.815279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 
00:34:16.723 [2024-07-14 04:50:36.815481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.815506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.815685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.723 [2024-07-14 04:50:36.815713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.723 qpair failed and we were unable to recover it. 00:34:16.723 [2024-07-14 04:50:36.815913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.815942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.816136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.816164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.816370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.816395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.816575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.816603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.816825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.816854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.817042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.817070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.817257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.817282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.817454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.817482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 
00:34:16.724 [2024-07-14 04:50:36.817682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.817712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.817939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.817969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.818153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.818178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.818385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.818413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.818638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.818663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.818820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.818846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.819035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.819062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.819259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.819287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.819454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.819482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.819683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.819711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 
00:34:16.724 [2024-07-14 04:50:36.819942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.819967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.820145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.820172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.820370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.820398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.820593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.820620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.820844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.820873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.821065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.821093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.821267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.821295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.821519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.821544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.821747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.821772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.821953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.821983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 
00:34:16.724 [2024-07-14 04:50:36.822184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.822212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.822443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.822468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.822651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.822676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.822856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.822896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.823085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.823113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.823339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.823367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.823535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.823560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.823786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.823814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.824023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.824049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.824226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.824254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 
00:34:16.724 [2024-07-14 04:50:36.824449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.824474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.824671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.824699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.824893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.724 [2024-07-14 04:50:36.824918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.724 qpair failed and we were unable to recover it. 00:34:16.724 [2024-07-14 04:50:36.825124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.825165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.825351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.825377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.825584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.825613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.825784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.825811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.826018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.826048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.826232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.826258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.826440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.826465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 
00:34:16.725 [2024-07-14 04:50:36.826635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.826663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.826878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.826904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.827081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.827106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.827281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.827308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.827502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.827530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.827727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.827755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.827961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.827996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.828173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.828202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.828431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.828458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.828631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.828659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 
00:34:16.725 [2024-07-14 04:50:36.828857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.828887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.829074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.829099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.829304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.829332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.829534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.829562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.829791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.829816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.830035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.830062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.830220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.830245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.830447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.830475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.830701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.830726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.830900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.830929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 
00:34:16.725 [2024-07-14 04:50:36.831149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.831177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.831402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.831430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.831630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.831655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.831808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.831833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.832041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.832074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.832243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.832271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.832442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.832467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.832637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.832665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.832837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.832869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.833073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.833101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 
00:34:16.725 [2024-07-14 04:50:36.833299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.833324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.833532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.833560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.833757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.833785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.833978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.725 [2024-07-14 04:50:36.834005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.725 qpair failed and we were unable to recover it. 00:34:16.725 [2024-07-14 04:50:36.834213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.834239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.834435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.834463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.834630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.834658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.834861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.834897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.835076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.835101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.835248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.835273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 
00:34:16.726 [2024-07-14 04:50:36.835476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.835504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.835706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.835735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.835933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.835959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.836159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.836187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.836384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.836412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.836602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.836631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.836820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.836844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.837084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.837113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.837301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.837327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.837524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.837552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 
00:34:16.726 [2024-07-14 04:50:36.837727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.837752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.837977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.838010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.838183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.838212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.838416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.838444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.838616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.838641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.838843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.838886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.839126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.839154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.839355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.839383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.839577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.839603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.839777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.839804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 
00:34:16.726 [2024-07-14 04:50:36.840001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.840030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.840242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.840268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.840457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.840482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.840713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.840741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.840954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.840980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.841216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.841244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.841460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.841485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.841667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.841695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.841890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.841919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.842118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.842147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 
00:34:16.726 [2024-07-14 04:50:36.842344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.842369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.842538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.842566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.842743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.842771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.842967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.842996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.843179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.843204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.726 qpair failed and we were unable to recover it. 00:34:16.726 [2024-07-14 04:50:36.843384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.726 [2024-07-14 04:50:36.843409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.843619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.843647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.843813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.843841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.844049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.844078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.844274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.844302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 
00:34:16.727 [2024-07-14 04:50:36.844498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.844523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.844697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.844722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.844901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.844927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.845074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.845099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.845309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.845336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.845568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.845593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.845739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.845765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.845995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.846025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.846259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.846285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.846512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.846540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 
00:34:16.727 [2024-07-14 04:50:36.846740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.846765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.846942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.846970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.847189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.847215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.847426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.847454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.847658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.847683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.847837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.847861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.848041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.848066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.848306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.848334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.848569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.848594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.848825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.848853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 
00:34:16.727 [2024-07-14 04:50:36.849031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.849059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.849271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.849299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.849505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.849530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.849678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.849702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.849920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.849947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.850129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.850154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.850360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.850385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.850588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.850616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.850839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.850871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.727 [2024-07-14 04:50:36.851109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.851137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 
00:34:16.727 [2024-07-14 04:50:36.851319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.727 [2024-07-14 04:50:36.851344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.727 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.851546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.851571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.851752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.851783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.851998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.852024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.852229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.852254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.852439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.852467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.852662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.852689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.852890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.852919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.853092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.853117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.853326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.853358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 
00:34:16.728 [2024-07-14 04:50:36.853594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.853619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.853802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.853827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.854058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.854085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.854277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.854302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.854499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.854527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.854734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.854762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.854944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.854970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.855124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.855169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.855361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.855389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.855618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.855646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 
00:34:16.728 [2024-07-14 04:50:36.855853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.855886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.856069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.856096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.856304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.856332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.856562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.856590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.856763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.856788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.856973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.856999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.857179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.857204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.857379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.857407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.857636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.857661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.857878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.857911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 
00:34:16.728 [2024-07-14 04:50:36.858087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.858115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.858338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.858368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.858596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.858633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.858847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.858893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.859106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.859132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.859317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.859343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.859504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.859534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.859764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.859793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.859996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.860025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.860209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.860234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 
00:34:16.728 [2024-07-14 04:50:36.860450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.860475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.860648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.728 [2024-07-14 04:50:36.860677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.728 qpair failed and we were unable to recover it. 00:34:16.728 [2024-07-14 04:50:36.860856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.729 [2024-07-14 04:50:36.860900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.729 qpair failed and we were unable to recover it. 00:34:16.729 [2024-07-14 04:50:36.861109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.729 [2024-07-14 04:50:36.861138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.729 qpair failed and we were unable to recover it. 00:34:16.729 [2024-07-14 04:50:36.861314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.729 [2024-07-14 04:50:36.861340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.729 qpair failed and we were unable to recover it. 00:34:16.729 [2024-07-14 04:50:36.861497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.729 [2024-07-14 04:50:36.861549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.729 qpair failed and we were unable to recover it. 00:34:16.729 [2024-07-14 04:50:36.861760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.729 [2024-07-14 04:50:36.861789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.729 qpair failed and we were unable to recover it. 00:34:16.729 [2024-07-14 04:50:36.861971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.729 [2024-07-14 04:50:36.862000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.729 qpair failed and we were unable to recover it. 00:34:16.729 [2024-07-14 04:50:36.862231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.729 [2024-07-14 04:50:36.862256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.729 qpair failed and we were unable to recover it. 00:34:16.729 [2024-07-14 04:50:36.862430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.729 [2024-07-14 04:50:36.862458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.729 qpair failed and we were unable to recover it. 
00:34:16.729 [2024-07-14 04:50:36.862634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.729 [2024-07-14 04:50:36.862664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.729 qpair failed and we were unable to recover it. 00:34:16.729 [2024-07-14 04:50:36.862850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.729 [2024-07-14 04:50:36.862893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.729 qpair failed and we were unable to recover it. 00:34:16.729 [2024-07-14 04:50:36.863108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.729 [2024-07-14 04:50:36.863135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:16.729 qpair failed and we were unable to recover it. 00:34:17.010 [2024-07-14 04:50:36.863375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.010 [2024-07-14 04:50:36.863401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.010 qpair failed and we were unable to recover it. 00:34:17.010 [2024-07-14 04:50:36.863553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.010 [2024-07-14 04:50:36.863581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.010 qpair failed and we were unable to recover it. 00:34:17.010 [2024-07-14 04:50:36.863786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.010 [2024-07-14 04:50:36.863812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.010 qpair failed and we were unable to recover it. 00:34:17.010 [2024-07-14 04:50:36.863997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.010 [2024-07-14 04:50:36.864023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.010 qpair failed and we were unable to recover it. 00:34:17.010 [2024-07-14 04:50:36.864178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.010 [2024-07-14 04:50:36.864204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.010 qpair failed and we were unable to recover it. 00:34:17.010 [2024-07-14 04:50:36.864378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.010 [2024-07-14 04:50:36.864406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.010 qpair failed and we were unable to recover it. 00:34:17.010 [2024-07-14 04:50:36.864634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.010 [2024-07-14 04:50:36.864665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.010 qpair failed and we were unable to recover it. 
00:34:17.010 [2024-07-14 04:50:36.864909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.010 [2024-07-14 04:50:36.864946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.010 qpair failed and we were unable to recover it. 00:34:17.010 [2024-07-14 04:50:36.865166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.010 [2024-07-14 04:50:36.865195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.010 qpair failed and we were unable to recover it. 00:34:17.010 [2024-07-14 04:50:36.865431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.010 [2024-07-14 04:50:36.865461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.010 qpair failed and we were unable to recover it. 00:34:17.010 [2024-07-14 04:50:36.865630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.010 [2024-07-14 04:50:36.865663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.010 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.865906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.865935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.866118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.866146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.866346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.866380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.866603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.866632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.866838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.866880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.867067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.867096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 
00:34:17.011 [2024-07-14 04:50:36.867291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.867320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.867551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.867586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.867821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.867847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.868067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.868095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.868304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.868330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.868559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.868596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.868825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.868851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.869080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.869108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.869315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.869340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.869503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.869529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 
00:34:17.011 [2024-07-14 04:50:36.869706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.869732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.869943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.869978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.870138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.870167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.870370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.870399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.870573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.870598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.870797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.870825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.871049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.871076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.871297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.871339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.871509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.871534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.871728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.871756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 
00:34:17.011 [2024-07-14 04:50:36.871930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.871959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-14 04:50:36.872140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-14 04:50:36.872168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.872376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.872401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.872557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.872582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.872789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.872814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.873043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.873069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.873250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.873275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.873487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.873515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.873715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.873743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.873923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.873953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 
00:34:17.012 [2024-07-14 04:50:36.874155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.874180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.874362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.874387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.874537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.874562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.874745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.874770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.874952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.874978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.875148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.875176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.875399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.875427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.875628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.875653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.875858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.875888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.876087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.876115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 
00:34:17.012 [2024-07-14 04:50:36.876314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.876342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.876535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.876563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.876796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.876821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.877049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.877077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.877302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.877330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.877559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.877583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.877786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.877811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.878043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.878073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.878311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-14 04:50:36.878337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-14 04:50:36.878566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.878594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 
00:34:17.013 [2024-07-14 04:50:36.878780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.878805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.879008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.879037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.879246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.879274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.879443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.879470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.879673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.879699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.879923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.879952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.880169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.880194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.880425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.880453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.880658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.880684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.880872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.880900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 
00:34:17.013 [2024-07-14 04:50:36.881092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.881120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.881325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.881357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.881554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.881579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.881811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.881839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.882065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.882091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.882296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.882324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.882506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.882531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.882719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.882744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.882942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.882971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.883205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.883231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 
00:34:17.013 [2024-07-14 04:50:36.883434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.883459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.883664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.883692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.883898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.883924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.884110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.884135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.884403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.884428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.884635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.884663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.013 qpair failed and we were unable to recover it. 00:34:17.013 [2024-07-14 04:50:36.884842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.013 [2024-07-14 04:50:36.884875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.885105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.885133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.885335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.885360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.885558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.885585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 
00:34:17.014 [2024-07-14 04:50:36.885748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.885776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.885990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.886019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.886193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.886218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.886397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.886422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.886605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.886630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.886841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.886881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.887110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.887135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.887372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.887400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.887598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.887632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.887822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.887850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 
00:34:17.014 [2024-07-14 04:50:36.888062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.888088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.888310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.888338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.888537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.888565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.888787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.888815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.889045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.889070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.889245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.889273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.889494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.889522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.889722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.889750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.889949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.889976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.890178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.890206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 
00:34:17.014 [2024-07-14 04:50:36.890381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.890408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.890604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.890632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.890839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.890864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.891048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.014 [2024-07-14 04:50:36.891076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.014 qpair failed and we were unable to recover it. 00:34:17.014 [2024-07-14 04:50:36.891268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.891296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.891497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.891525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.891720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.891746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.891923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.891951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.892152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.892180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.892384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.892412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 
00:34:17.015 [2024-07-14 04:50:36.892609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.892635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.892785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.892810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.893013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.893043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.893220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.893248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.893445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.893470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.893654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.893679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.893889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.893919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.894125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.894151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.894333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.894358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.894543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.894569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 
00:34:17.015 [2024-07-14 04:50:36.894776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.894802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.895033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.895062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.895269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.895294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.895493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.895519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.895722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.895749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.895957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.895983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.896188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.896213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.896417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.896445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.896653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.896678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.896864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.896894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 
00:34:17.015 [2024-07-14 04:50:36.897165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.897190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-14 04:50:36.897396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-14 04:50:36.897423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-14 04:50:36.897630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.897657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-14 04:50:36.897855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.897897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-14 04:50:36.898095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.898120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-14 04:50:36.898325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.898353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-14 04:50:36.898534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.898562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-14 04:50:36.898929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.898958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-14 04:50:36.899194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.899219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-14 04:50:36.899428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.899456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 
00:34:17.016 [2024-07-14 04:50:36.899617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.899645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-14 04:50:36.899842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.899876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-14 04:50:36.900085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.900110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-14 04:50:36.900319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.900347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-14 04:50:36.900574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.900603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-14 04:50:36.900838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.900873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-14 04:50:36.901100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.901125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-14 04:50:36.901332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.901359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-14 04:50:36.901561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.901586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-14 04:50:36.901798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.901824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 
00:34:17.016 [2024-07-14 04:50:36.901995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.902022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-14 04:50:36.902222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.902250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-14 04:50:36.902423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.902450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-14 04:50:36.902679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-14 04:50:36.902704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-14 04:50:36.902913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.902939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.903146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.903174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.903369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.903401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.903593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.903620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.903820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.903845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.904054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.904082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 
00:34:17.017 [2024-07-14 04:50:36.904326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.904354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.904581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.904610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.904812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.904837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.905072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.905101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.905262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.905291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.905515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.905542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.905779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.905804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.905994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.906023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.906188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.906217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.906419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.906447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 
00:34:17.017 [2024-07-14 04:50:36.906624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.906649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.906839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.906883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.907092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.907120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.907350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.907375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.907557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.907582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.907810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.907837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.908090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.908116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.908324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.908352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.908561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.908586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-14 04:50:36.908739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-14 04:50:36.908764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 
00:34:17.018 [2024-07-14 04:50:36.908977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.018 [2024-07-14 04:50:36.909006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420
00:34:17.018 qpair failed and we were unable to recover it.
00:34:17.020 [2024-07-14 04:50:36.926840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.020 [2024-07-14 04:50:36.926874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420
00:34:17.020 qpair failed and we were unable to recover it.
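errno = 111 is ECONNREFUSED on Linux: the host reaches 10.0.0.2, but nothing is accepting on TCP port 4420 while the target is down, so every qpair reconnect attempt above fails immediately. A minimal bash sketch of the same probe, illustrative only; the address and port are copied from the log and the /dev/tcp redirection is a bash convenience, not part of the test suite:

#!/usr/bin/env bash
# Illustrative probe: does anything accept TCP connections on the NVMe/TCP port?
# A refusal here is the same condition the log reports as errno = 111 (ECONNREFUSED).
host=10.0.0.2   # target address taken from the log
port=4420       # NVMe/TCP port taken from the log
if true 2>/dev/null <"/dev/tcp/${host}/${port}"; then
    echo "listener present on ${host}:${port}"
else
    echo "connect() refused on ${host}:${port} (ECONNREFUSED, errno 111)"
fi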
00:34:17.020 [2024-07-14 04:50:36.927077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.020 [2024-07-14 04:50:36.927102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420
00:34:17.020 qpair failed and we were unable to recover it.
00:34:17.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2948951 Killed "${NVMF_APP[@]}" "$@"
00:34:17.021 04:50:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:34:17.021 04:50:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:17.021 04:50:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
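The "Killed "${NVMF_APP[@]}" "$@"" line is bash reporting that the target application it had launched (PID 2948951) received SIGKILL; that deliberate kill is what leaves nothing listening on 10.0.0.2:4420 during the retries before and after it. A minimal sketch of such a disconnect step, assuming the caller tracked the target PID in a placeholder variable (this is not the target_disconnect.sh code):

#!/usr/bin/env bash
# Illustrative disconnect step: hard-kill a previously launched target process.
# target_pid is a placeholder for however the caller tracked the PID; it is not
# a variable taken from target_disconnect.sh.
target_pid=$1
kill -9 "$target_pid"                   # SIGKILL, so the target runs no shutdown path
wait "$target_pid" 2>/dev/null || true  # reap the child; bash reports "Killed ..." as above
# Until a new target is started, host-side connect() attempts to the listen
# address fail with ECONNREFUSED (errno 111), as seen throughout this log.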
00:34:17.021 04:50:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
04:50:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[2024-07-14 04:50:36.929141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.021 [2024-07-14 04:50:36.929184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb570 with addr=10.0.0.2, port=4420
00:34:17.021 qpair failed and we were unable to recover it.
00:34:17.021 [2024-07-14 04:50:36.929427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.021 [2024-07-14 04:50:36.929473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420
00:34:17.021 qpair failed and we were unable to recover it.
00:34:17.021 04:50:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2949503
00:34:17.021 04:50:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:17.021 04:50:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2949503
00:34:17.022 04:50:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 2949503 ']'
00:34:17.022 04:50:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:17.022 04:50:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:17.022 04:50:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:17.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
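The trace above shows the target being relaunched inside the cvl_0_0_ns_spdk network namespace with nvmf_tgt -i 0 -e 0xFFFF -m 0xF0. The -m argument is a hexadecimal CPU core mask, so 0xF0 pins the target to cores 4 through 7. A small bash sketch of how such a mask expands to core indices, illustrative only and not SPDK code:

#!/usr/bin/env bash
# Expand a hexadecimal core mask (as passed to nvmf_tgt with -m) into CPU indices.
mask=${1:-0xF0}   # 0xF0 is the mask used in the relaunch above
cores=()
for ((cpu = 0; cpu < 64; cpu++)); do
    if (( (mask >> cpu) & 1 )); then
        cores+=("$cpu")
    fi
done
echo "mask ${mask} selects cores: ${cores[*]}"   # 0xF0 -> 4 5 6 7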
00:34:17.022 04:50:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
04:50:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[2024-07-14 04:50:36.933940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.022 [2024-07-14 04:50:36.933967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420
00:34:17.022 qpair failed and we were unable to recover it.
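waitforlisten 2949503 then blocks until the relaunched target process is up and listening, which is what the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message refers to. A minimal sketch of that kind of wait loop, assuming the socket path and retry count seen in the trace; it is not the SPDK helper itself:

#!/usr/bin/env bash
# Illustrative wait loop, not the SPDK helper: poll until the given PID is alive
# and its RPC UNIX domain socket exists, or give up after max_retries attempts.
wait_for_rpc_socket() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || { echo "process $pid exited early" >&2; return 1; }
        [[ -S "$sock" ]] && return 0   # socket node exists, so the process is listening
        sleep 0.5
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
wait_for_rpc_socket 2949503   # PID of the relaunched target in the trace above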
00:34:17.022 [2024-07-14 04:50:36.935752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.022 [2024-07-14 04:50:36.935783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420
00:34:17.022 qpair failed and we were unable to recover it.
00:34:17.025 [2024-07-14 04:50:36.953588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.025 [2024-07-14 04:50:36.953614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420
00:34:17.025 qpair failed and we were unable to recover it.
00:34:17.025 [2024-07-14 04:50:36.953774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.025 [2024-07-14 04:50:36.953800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.025 qpair failed and we were unable to recover it. 00:34:17.025 [2024-07-14 04:50:36.953991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.025 [2024-07-14 04:50:36.954018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.025 qpair failed and we were unable to recover it. 00:34:17.025 [2024-07-14 04:50:36.954199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.025 [2024-07-14 04:50:36.954224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.025 qpair failed and we were unable to recover it. 00:34:17.025 [2024-07-14 04:50:36.954430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.025 [2024-07-14 04:50:36.954456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.025 qpair failed and we were unable to recover it. 00:34:17.025 [2024-07-14 04:50:36.954642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.025 [2024-07-14 04:50:36.954668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.025 qpair failed and we were unable to recover it. 00:34:17.025 [2024-07-14 04:50:36.954813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.025 [2024-07-14 04:50:36.954839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.025 qpair failed and we were unable to recover it. 00:34:17.025 [2024-07-14 04:50:36.955037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.025 [2024-07-14 04:50:36.955065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.025 qpair failed and we were unable to recover it. 00:34:17.025 [2024-07-14 04:50:36.955252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.025 [2024-07-14 04:50:36.955279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.025 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.955433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.955459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.955630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.955656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 
00:34:17.026 [2024-07-14 04:50:36.955836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.955862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.956047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.956077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.956261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.956288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.956448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.956476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.956633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.956660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.956877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.956904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.957095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.957121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.957324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.957350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.957507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.957534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.957713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.957739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 
00:34:17.026 [2024-07-14 04:50:36.957901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.957929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.958117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.958143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.958336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.958362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.958540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.958566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.958777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.958803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.958994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.959020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.959171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.959196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.959406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.959432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.959584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.959612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.959793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.959819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 
00:34:17.026 [2024-07-14 04:50:36.959981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.960008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.960167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.960193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.960374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.960400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.960582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.960608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.960784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.960810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.960965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.960992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.961181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.961207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.961389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.961415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.961603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.961629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.961833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.961859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 
00:34:17.026 [2024-07-14 04:50:36.962029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.026 [2024-07-14 04:50:36.962055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.026 qpair failed and we were unable to recover it. 00:34:17.026 [2024-07-14 04:50:36.962238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.962265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.962424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.962451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.962638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.962664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.962822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.962849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.963009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.963036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.963244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.963270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.963472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.963498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.963705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.963731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.963920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.963947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 
00:34:17.027 [2024-07-14 04:50:36.964123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.964148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.964326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.964357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.964580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.964606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.964758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.964784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.964963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.964990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.965144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.965170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.965321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.965348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.965556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.965581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.965761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.965787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.965952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.965978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 
00:34:17.027 [2024-07-14 04:50:36.966157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.966183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.966358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.966383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.966563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.966590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.966773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.966798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.966983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.967010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.967171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.967197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.967374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.967400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.967548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.967574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.967786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.967812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.967984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.968011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 
00:34:17.027 [2024-07-14 04:50:36.968192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.968218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.968387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.968413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.968620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.968646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.968796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.968823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.969014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.969041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.969200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.969226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.969376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.969417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.969603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.969629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.969796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.969821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.970000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.970027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 
00:34:17.027 [2024-07-14 04:50:36.970213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.970239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.027 [2024-07-14 04:50:36.970413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.027 [2024-07-14 04:50:36.970439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.027 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.970586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.970627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.970812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.970837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.971036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.971062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.971245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.971271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.971424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.971450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.971599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.971625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.971810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.971836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.972042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.972069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 
00:34:17.028 [2024-07-14 04:50:36.972221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.972262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.972450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.972480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.972683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.972709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.972881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.972908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.973091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.973118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.973301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.973327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.973511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.973538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.973720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.973746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.973928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.973955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.974137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.974163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 
00:34:17.028 [2024-07-14 04:50:36.974419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.974445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.974628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.974654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.974803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.974844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.975033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.975059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.975229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.975254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.975506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.975532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.975714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.975741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.975971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.975997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.976184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.976210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.976358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.976384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 
00:34:17.028 [2024-07-14 04:50:36.976536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.976561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.976743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.976770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.976991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.977018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.977179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.977205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.977386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.977412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.028 qpair failed and we were unable to recover it. 00:34:17.028 [2024-07-14 04:50:36.977588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.028 [2024-07-14 04:50:36.977614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.029 qpair failed and we were unable to recover it. 00:34:17.029 [2024-07-14 04:50:36.977766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.029 [2024-07-14 04:50:36.977792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.029 qpair failed and we were unable to recover it. 00:34:17.029 [2024-07-14 04:50:36.977971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.029 [2024-07-14 04:50:36.977998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.029 qpair failed and we were unable to recover it. 00:34:17.029 [2024-07-14 04:50:36.978211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.029 [2024-07-14 04:50:36.978238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.029 qpair failed and we were unable to recover it. 00:34:17.029 [2024-07-14 04:50:36.978406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.029 [2024-07-14 04:50:36.978432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.029 qpair failed and we were unable to recover it. 
00:34:17.029 [2024-07-14 04:50:36.978725 - 04:50:36.980282] (the same connect() failed, errno = 111 / sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 records continue to repeat, interleaved with the two nvmf target start-up records below; each attempt ends with "qpair failed and we were unable to recover it.")
00:34:17.029 [2024-07-14 04:50:36.980226] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:34:17.029 [2024-07-14 04:50:36.980302] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
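The DPDK EAL parameters recorded above appear to belong to the nvmf target process being started while the host keeps retrying its connection: -c 0xF0 is the core mask (cores 4-7), --file-prefix=spdk0 keeps its hugepage/runtime files separate from other SPDK processes, and --proc-type=auto lets EAL pick the primary or secondary role automatically. As a rough, hypothetical sketch (plain DPDK, not the actual SPDK nvmf_tgt start-up path), an application consumes such an argument vector by handing it to rte_eal_init():

/* Hypothetical stand-alone sketch: pass an EAL argument vector similar to the
 * one logged above into DPDK. This is NOT the SPDK nvmf_tgt code path; it only
 * illustrates how the listed parameters are consumed. Requires DPDK headers/libs. */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                 /* program name, as shown in the log */
        "-c", "0xF0",           /* core mask: lcores 4-7 */
        "--no-telemetry",
        "--file-prefix=spdk0",  /* isolate hugepage/runtime files */
        "--proc-type=auto",     /* primary or secondary, auto-detected */
    };
    int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

    /* rte_eal_init() parses the EAL options and brings up memory, lcores, etc. */
    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "rte_eal_init failed\n");
        return 1;
    }

    /* ... target/application work would happen here ... */

    rte_eal_cleanup();
    return 0;
}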
00:34:17.029-00:34:17.030 [2024-07-14 04:50:36.980470 - 04:50:36.988996] (connect() failed, errno = 111 and the matching sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 keep repeating for every further attempt in this interval; each one ends with "qpair failed and we were unable to recover it.")
00:34:17.030 [2024-07-14 04:50:36.989182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.989209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.989433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.989464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.989644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.989670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.989848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.989904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.990097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.990123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.990278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.990311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.990465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.990491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.990643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.990671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.990904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.990931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.991110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.991137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 
00:34:17.030 [2024-07-14 04:50:36.991333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.991359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.991517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.991544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.991754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.991780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.991965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.991992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.992178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.992204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.992386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.992412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.992629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.992655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.993533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.993575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.993820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.993846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.994082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.994109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 
00:34:17.030 [2024-07-14 04:50:36.994306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.994334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.994541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.994577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.994764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.994791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.995468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.995519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.995736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.995763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.995980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.996007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.030 [2024-07-14 04:50:36.996191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.030 [2024-07-14 04:50:36.996217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.030 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:36.996372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:36.996398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:36.996588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:36.996614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:36.996796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:36.996822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 
00:34:17.031 [2024-07-14 04:50:36.997019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:36.997046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:36.997260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:36.997286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:36.997471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:36.997497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:36.997675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:36.997701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:36.997871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:36.997898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:36.998091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:36.998119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:36.998333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:36.998359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:36.998556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:36.998582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:36.998763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:36.998789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:36.998976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:36.999003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 
00:34:17.031 [2024-07-14 04:50:36.999161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:36.999187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:36.999364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:36.999394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:36.999576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:36.999602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:36.999760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:36.999787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:36.999975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.000002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:37.000208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.000234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:37.000386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.000413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:37.000622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.000649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:37.000828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.000854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:37.001046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.001072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 
00:34:17.031 [2024-07-14 04:50:37.001287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.001313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:37.001488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.001514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:37.001694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.001721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:37.001894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.001921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:37.002104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.002130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:37.002286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.002313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:37.002497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.002523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:37.002706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.002732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:37.002914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.002940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:37.003125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.003151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 
00:34:17.031 [2024-07-14 04:50:37.003342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.003368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:37.003528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.003553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:37.003732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.003758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:37.003945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.003971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:37.004180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.004206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:37.004379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.004405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:37.004585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.004611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.031 [2024-07-14 04:50:37.004785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.031 [2024-07-14 04:50:37.004811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.031 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.004978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.005005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.005188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.005214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 
00:34:17.032 [2024-07-14 04:50:37.005394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.005421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.005679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.005705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.005916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.005942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.006128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.006154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.006347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.006373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.006552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.006578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.006764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.006791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.006960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.006988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.007140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.007166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.007418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.007444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 
00:34:17.032 [2024-07-14 04:50:37.007653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.007679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.007854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.007893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.008108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.008134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.008310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.008336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.008546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.008572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.008750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.008776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.008967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.008993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.009174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.009200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.009378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.009403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.009586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.009613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 
00:34:17.032 [2024-07-14 04:50:37.009774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.009800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.009956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.009983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.010196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.010223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.010416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.010443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.010624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.010650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.010863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.010896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.011055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.011081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.011277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.011302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.011492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.011518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.011700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.011726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 
00:34:17.032 [2024-07-14 04:50:37.011912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.011939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.012119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.012147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.012360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.012387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.012595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.012621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.032 [2024-07-14 04:50:37.012831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.032 [2024-07-14 04:50:37.012857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.032 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.013016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.013042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.013197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.013224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.013433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.013460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.013680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.013705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.013910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.013937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 
00:34:17.033 [2024-07-14 04:50:37.014092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.014119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.014318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.014344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.014528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.014554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.014731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.014757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.014948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.014975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.015154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.015181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.015366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.015392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.015603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.015629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.015808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 EAL: No free 2048 kB hugepages reported on node 1 00:34:17.033 [2024-07-14 04:50:37.015834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.016020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.016046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 
00:34:17.033 [2024-07-14 04:50:37.016237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.016263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.016448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.016474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.016657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.016683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.016829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.016856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.017073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.017098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.017256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.017282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.017496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.017522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.017698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.017724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.017906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.017934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.018118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.018144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 
00:34:17.033 [2024-07-14 04:50:37.018291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.018317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.018493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.018519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.018677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.018703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.018873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.018900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.019060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.019092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.019339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.019365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.019554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.019580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.019761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.019787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.019965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.019992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.020171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.020197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 
00:34:17.033 [2024-07-14 04:50:37.020383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.020409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.020564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.020590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.033 [2024-07-14 04:50:37.020747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.033 [2024-07-14 04:50:37.020789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.033 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.020981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.021009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.021164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.021192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.021377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.021404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.021606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.021633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.021816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.021841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.022016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.022043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.022223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.022250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 
00:34:17.034 [2024-07-14 04:50:37.022432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.022458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.022609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.022635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.022821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.022849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.023009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.023034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.023233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.023258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.023408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.023433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.023617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.023647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.023802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.023830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.024013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.024039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.024192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.024219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 
00:34:17.034 [2024-07-14 04:50:37.024398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.024425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.024603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.024629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.024783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.024810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.025032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.025059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.025216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.025243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.025406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.025434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.025612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.025639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.025820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.025848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.026038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.026065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.026205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.026231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 
00:34:17.034 [2024-07-14 04:50:37.026385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.026411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.026586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.026612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.026818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.026845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.026998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.027025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.027170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.027201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.027387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.027413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.027588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.027614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.027802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.027829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.027989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.028016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.028196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.028222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 
00:34:17.034 [2024-07-14 04:50:37.028402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.028428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.028636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.028662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.028833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.028859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.034 qpair failed and we were unable to recover it. 00:34:17.034 [2024-07-14 04:50:37.029043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.034 [2024-07-14 04:50:37.029070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.029223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.029249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.029425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.029451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.029609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.029636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.029819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.029845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.030045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.030072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.030265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.030291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 
00:34:17.035 [2024-07-14 04:50:37.030449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.030475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.030657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.030684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.030840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.030872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.031084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.031109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.031297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.031324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.031474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.031500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.031684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.031711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.031898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.031925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.032106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.032132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.032310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.032336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 
00:34:17.035 [2024-07-14 04:50:37.032547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.032574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.032736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.032768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.032960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.032988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.033167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.033195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.033347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.033374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.033561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.033587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.033743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.033770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.033956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.033983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.034167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.034195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.034372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.034398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 
00:34:17.035 [2024-07-14 04:50:37.034580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.034606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.034779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.034805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.034998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.035025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.035178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.035205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.035384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.035409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.035573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.035601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.035807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.035834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.036059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.036087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.036249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.036275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.036459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.036485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 
00:34:17.035 [2024-07-14 04:50:37.036670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.036698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.036848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.036879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.037090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.037116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.037297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.035 [2024-07-14 04:50:37.037323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.035 qpair failed and we were unable to recover it. 00:34:17.035 [2024-07-14 04:50:37.037478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.037504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.037688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.037714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.037903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.037930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.038110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.038135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.038301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.038327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.038480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.038507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 
00:34:17.036 [2024-07-14 04:50:37.038659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.038685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.038846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.038879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.039064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.039091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.039255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.039282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.039493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.039519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.039669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.039695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.039840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.039870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.040051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.040077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.040241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.040266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.040442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.040468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 
00:34:17.036 [2024-07-14 04:50:37.040644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.040670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.040848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.040884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.041092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.041118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.041294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.041320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.041495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.041521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.041676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.041702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.041889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.041916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.042095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.042121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.042325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.042351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.042534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.042560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 
00:34:17.036 [2024-07-14 04:50:37.042772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.042797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.042971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.042998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.043179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.043205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.043383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.043409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.043625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.043651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.043806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.043832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.044019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.044045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.044207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.044233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.044389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.044417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.044597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.044623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 
00:34:17.036 [2024-07-14 04:50:37.044772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.044798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.044984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.045010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.045187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.045213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.045394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.045420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.045627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.036 [2024-07-14 04:50:37.045652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.036 qpair failed and we were unable to recover it. 00:34:17.036 [2024-07-14 04:50:37.045824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.045850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.046027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.046054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.046211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.046237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.046448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.046474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.046636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.046664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 
00:34:17.037 [2024-07-14 04:50:37.046845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.046881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.047061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.047088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.047282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.047308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.047468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.047494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.047678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.047704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.047854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.047887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.048069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.048095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.048272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.048298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.048453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.048479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.048687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.048712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 
00:34:17.037 [2024-07-14 04:50:37.048893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.048919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.049094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.049125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.049308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.049334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.049513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.049539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.049724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.049751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.049927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.049954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.050139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.050165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.050342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.050369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.050544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.050570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.050747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.050772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 
00:34:17.037 [2024-07-14 04:50:37.050947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.050975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.051118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.051145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.051352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.051378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.051578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.051604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.051755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.051781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.051966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.051992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.052171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.052196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.052374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.052401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.052550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.052576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 00:34:17.037 [2024-07-14 04:50:37.052757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.037 [2024-07-14 04:50:37.052783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.037 qpair failed and we were unable to recover it. 
00:34:17.037 [2024-07-14 04:50:37.052933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.052960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.053138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.053164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.053371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.053398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.053545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.053572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.053776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.053802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.054006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.054033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.054047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:17.038 [2024-07-14 04:50:37.054209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.054235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.054446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.054472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.054688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.054714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 
00:34:17.038 [2024-07-14 04:50:37.054922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.054949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.055226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.055252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.055436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.055462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.055672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.055698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.055880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.055907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.056116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.056142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.056321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.056347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.056555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.056582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.056765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.056791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.056943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.056969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 
00:34:17.038 [2024-07-14 04:50:37.057178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.057204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.057411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.057436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.057615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.057641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.057796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.057822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.058006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.058033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.058187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.058215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.058358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.058384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.058550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.058576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.058796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.058822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.059008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.059035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 
00:34:17.038 [2024-07-14 04:50:37.059187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.059214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.059487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.059514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.059692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.059718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.059905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.059932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.060090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.060118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.060329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.060360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.060516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.060543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.060722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.060749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.060955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.060982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.061169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.061195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 
00:34:17.038 [2024-07-14 04:50:37.061493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.061519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.038 [2024-07-14 04:50:37.061699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.038 [2024-07-14 04:50:37.061726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.038 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.061914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.061941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.062126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.062152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.062335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.062362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.062614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.062640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.062825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.062851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.063014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.063040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.063251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.063277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.063458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.063485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 
00:34:17.039 [2024-07-14 04:50:37.063640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.063666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.063819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.063847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.064112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.064138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.064303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.064330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.064513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.064539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.064694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.064720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.064913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.064939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.065149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.065175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.065357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.065383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.065539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.065564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 
00:34:17.039 [2024-07-14 04:50:37.065734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.065760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.065977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.066004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.066200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.066227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.066423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.066451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.066630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.066657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.066838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.066871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.067078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.067105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.067292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.067320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.067500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.067527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.067686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.067713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 
00:34:17.039 [2024-07-14 04:50:37.067875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.067902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.068086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.068113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.068304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.068332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.068494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.068522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.068682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.068709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.068870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.068902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.069058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.069084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.069262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.069287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.069441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.069467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.069645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.069671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 
00:34:17.039 [2024-07-14 04:50:37.069836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.069877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.070061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.070088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.039 qpair failed and we were unable to recover it. 00:34:17.039 [2024-07-14 04:50:37.070241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.039 [2024-07-14 04:50:37.070268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.070453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.070480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.070633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.070675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.070889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.070915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.071097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.071123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.071304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.071330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.071541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.071567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.071783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.071809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 
00:34:17.040 [2024-07-14 04:50:37.071970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.071997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.072166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.072192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.072348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.072375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.072531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.072559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.072745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.072772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.072987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.073014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.073168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.073194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.073405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.073431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.073591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.073617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.073798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.073825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 
00:34:17.040 [2024-07-14 04:50:37.074032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.074059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.074209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.074243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.074429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.074456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.074638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.074664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.074806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.074832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.074998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.075026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.075207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.075233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.075391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.075417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.075624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.075650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.075801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.075827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 
00:34:17.040 [2024-07-14 04:50:37.076008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.076034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.076190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.076216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.076420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.076446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.076593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.076619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.076798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.076825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.077040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.077073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.077261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.077288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.077440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.077466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.077643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.077669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.077847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.077883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 
00:34:17.040 [2024-07-14 04:50:37.078065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.078092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.078286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.078312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.078522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.078548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.040 qpair failed and we were unable to recover it. 00:34:17.040 [2024-07-14 04:50:37.078709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.040 [2024-07-14 04:50:37.078741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.078932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.078959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.079163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.079189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.079343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.079369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.079550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.079576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.079735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.079761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.079945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.079972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 
00:34:17.041 [2024-07-14 04:50:37.080124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.080151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.080318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.080344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.080535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.080562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.080742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.080768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.080955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.080982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.081141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.081176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.081359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.081384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.081566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.081592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.081745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.081771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.081933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.081961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 
00:34:17.041 [2024-07-14 04:50:37.082142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.082169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.082362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.082388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.082577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.082604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.082783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.082808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.082974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.083002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.083186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.083212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.083420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.083446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.083600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.083627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.083811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.083837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.084017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.084043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 
00:34:17.041 [2024-07-14 04:50:37.084229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.084255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.084410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.084436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.084618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.084646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.084875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.084902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.085081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.085107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.085273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.085303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.085485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.085511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.085714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.041 [2024-07-14 04:50:37.085740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.041 qpair failed and we were unable to recover it. 00:34:17.041 [2024-07-14 04:50:37.085902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.042 [2024-07-14 04:50:37.085929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.042 qpair failed and we were unable to recover it. 00:34:17.042 [2024-07-14 04:50:37.086093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.042 [2024-07-14 04:50:37.086119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.042 qpair failed and we were unable to recover it. 
00:34:17.042 [2024-07-14 04:50:37.086299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.042 [2024-07-14 04:50:37.086326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.042 qpair failed and we were unable to recover it. 00:34:17.042 [2024-07-14 04:50:37.086503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.042 [2024-07-14 04:50:37.086529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.042 qpair failed and we were unable to recover it. 00:34:17.042 [2024-07-14 04:50:37.086681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.042 [2024-07-14 04:50:37.086707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.042 qpair failed and we were unable to recover it. 00:34:17.042 [2024-07-14 04:50:37.086892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.042 [2024-07-14 04:50:37.086920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.042 qpair failed and we were unable to recover it. 00:34:17.042 [2024-07-14 04:50:37.087135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.042 [2024-07-14 04:50:37.087161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.042 qpair failed and we were unable to recover it. 00:34:17.042 [2024-07-14 04:50:37.087338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.042 [2024-07-14 04:50:37.087364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.042 qpair failed and we were unable to recover it. 00:34:17.042 [2024-07-14 04:50:37.087547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.042 [2024-07-14 04:50:37.087573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.042 qpair failed and we were unable to recover it. 00:34:17.042 [2024-07-14 04:50:37.087756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.042 [2024-07-14 04:50:37.087783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.042 qpair failed and we were unable to recover it. 00:34:17.042 [2024-07-14 04:50:37.087961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.042 [2024-07-14 04:50:37.087989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.042 qpair failed and we were unable to recover it. 00:34:17.042 [2024-07-14 04:50:37.088151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.042 [2024-07-14 04:50:37.088184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.042 qpair failed and we were unable to recover it. 
00:34:17.042 [2024-07-14 04:50:37.088362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.042 [2024-07-14 04:50:37.088389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.042 qpair failed and we were unable to recover it.
00:34:17.042-00:34:17.047 [2024-07-14 04:50:37.088551 .. 04:50:37.131011] the same three-line error sequence repeats for every subsequent connection attempt in this window: posix_sock_create reports connect() failed, errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error with addr=10.0.0.2, port=4420 (first for tqpair=0x7f4498000b90 and, from 04:50:37.091017 onward, for tqpair=0x7f44a8000b90), and each qpair fails and we were unable to recover it.
00:34:17.047 [2024-07-14 04:50:37.131164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.047 [2024-07-14 04:50:37.131188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.047 qpair failed and we were unable to recover it. 00:34:17.047 [2024-07-14 04:50:37.131336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.047 [2024-07-14 04:50:37.131361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.047 qpair failed and we were unable to recover it. 00:34:17.047 [2024-07-14 04:50:37.131534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.047 [2024-07-14 04:50:37.131564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.047 qpair failed and we were unable to recover it. 00:34:17.047 [2024-07-14 04:50:37.131743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.047 [2024-07-14 04:50:37.131768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.047 qpair failed and we were unable to recover it. 00:34:17.047 [2024-07-14 04:50:37.131954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.047 [2024-07-14 04:50:37.131980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.047 qpair failed and we were unable to recover it. 00:34:17.047 [2024-07-14 04:50:37.132171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.047 [2024-07-14 04:50:37.132197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.047 qpair failed and we were unable to recover it. 00:34:17.047 [2024-07-14 04:50:37.132380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.047 [2024-07-14 04:50:37.132405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.047 qpair failed and we were unable to recover it. 00:34:17.047 [2024-07-14 04:50:37.132583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.047 [2024-07-14 04:50:37.132609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.047 qpair failed and we were unable to recover it. 00:34:17.047 [2024-07-14 04:50:37.132816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.047 [2024-07-14 04:50:37.132843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.047 qpair failed and we were unable to recover it. 00:34:17.047 [2024-07-14 04:50:37.133049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.047 [2024-07-14 04:50:37.133074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.047 qpair failed and we were unable to recover it. 
00:34:17.047 [2024-07-14 04:50:37.133232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.047 [2024-07-14 04:50:37.133258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.047 qpair failed and we were unable to recover it. 00:34:17.047 [2024-07-14 04:50:37.133442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.047 [2024-07-14 04:50:37.133466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.047 qpair failed and we were unable to recover it. 00:34:17.047 [2024-07-14 04:50:37.133678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.047 [2024-07-14 04:50:37.133703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.047 qpair failed and we were unable to recover it. 00:34:17.047 [2024-07-14 04:50:37.133908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.047 [2024-07-14 04:50:37.133934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.047 qpair failed and we were unable to recover it. 00:34:17.047 [2024-07-14 04:50:37.134082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.047 [2024-07-14 04:50:37.134107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.047 qpair failed and we were unable to recover it. 00:34:17.047 [2024-07-14 04:50:37.134307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.047 [2024-07-14 04:50:37.134333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.047 qpair failed and we were unable to recover it. 00:34:17.047 [2024-07-14 04:50:37.134518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.047 [2024-07-14 04:50:37.134544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.047 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.134726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.134751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.134910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.134936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.135120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.135145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 
00:34:17.048 [2024-07-14 04:50:37.135322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.135348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.135541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.135566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.135772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.135797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.136004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.136030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.136212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.136238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.136428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.136454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.136633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.136659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.136836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.136862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.137022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.137047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.137267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.137292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 
00:34:17.048 [2024-07-14 04:50:37.137499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.137524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.137676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.137701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.137895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.137922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.138109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.138134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.138284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.138309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.138512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.138538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.138713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.138739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.138918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.138943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.139106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.139131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.139277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.139302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 
00:34:17.048 [2024-07-14 04:50:37.139487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.139513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.139691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.139716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.139921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.139970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.140123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.140148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.140320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.140345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.140559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.140585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.140743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.140768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.140947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.140972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.141177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.141203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.141406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.141431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 
00:34:17.048 [2024-07-14 04:50:37.141587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.141612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.141765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.141790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.141978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.142005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.142186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.142212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.142417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.142442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.142604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.142629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.142833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.142858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.048 [2024-07-14 04:50:37.143073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.048 [2024-07-14 04:50:37.143099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.048 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.143260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.143286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.143492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.143517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 
00:34:17.049 [2024-07-14 04:50:37.143711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.143737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.143924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.143950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.144128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.144160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.144338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.144364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.144546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.144572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.144721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.144746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.144893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.144919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.145088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.145113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.145261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.145286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.145482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.145507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 
00:34:17.049 [2024-07-14 04:50:37.145660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.145685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.145833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.145859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.146010] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:17.049 [2024-07-14 04:50:37.146017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.146042] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:17.049 [2024-07-14 04:50:37.146047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 [2024-07-14 04:50:37.146057] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.146070] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:17.049 [2024-07-14 04:50:37.146081] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:17.049 [2024-07-14 04:50:37.146200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.146225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.146368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.146398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.146400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:17.049 [2024-07-14 04:50:37.146455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:17.049 [2024-07-14 04:50:37.146453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:17.049 [2024-07-14 04:50:37.146556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.146426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:17.049 [2024-07-14 04:50:37.146583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.146740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.146765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 
00:34:17.049 [2024-07-14 04:50:37.146948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.146974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.147144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.147177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.147360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.147386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.147577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.147603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.147753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.147780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.147952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.147979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.148167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.148193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.148340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.148366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.148547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.148573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.148734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.148760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 
00:34:17.049 [2024-07-14 04:50:37.149030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.149058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.149250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.149277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.149434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.149461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.149627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.149654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.149821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.149857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.150047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.150076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.150263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.150288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.150433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.150459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.049 qpair failed and we were unable to recover it. 00:34:17.049 [2024-07-14 04:50:37.150650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.049 [2024-07-14 04:50:37.150676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.150827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.150859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 
00:34:17.050 [2024-07-14 04:50:37.151027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.151052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.151212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.151238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.151425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.151451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.151603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.151628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.151791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.151816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.152055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.152081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.152265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.152291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.152538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.152563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.152707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.152733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.152922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.152949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 
00:34:17.050 [2024-07-14 04:50:37.153102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.153127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.153404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.153429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.153601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.153627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.153801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.153826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.154010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.154036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.154222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.154247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.154539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.154564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.154744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.154771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.154945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.154970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.155115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.155140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 
00:34:17.050 [2024-07-14 04:50:37.155438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.155463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.155606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.155632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.155825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.155860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.156071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.156100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.156286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.156311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.156458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.156483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.156658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.156683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.156871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.156897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.157054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.157078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.157241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.157266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 
00:34:17.050 [2024-07-14 04:50:37.157533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.157558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.157741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.157767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.157921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.050 [2024-07-14 04:50:37.157946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.050 qpair failed and we were unable to recover it. 00:34:17.050 [2024-07-14 04:50:37.158097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.158123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.158317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.158343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.158492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.158522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.158700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.158726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.158996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.159021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.159170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.159194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.159402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.159427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 
00:34:17.051 [2024-07-14 04:50:37.159580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.159605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.159763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.159788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.159958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.159983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.160164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.160190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.160346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.160371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.160531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.160564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.160741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.160767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.160932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.160959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.161129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.161158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.161324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.161351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 
00:34:17.051 [2024-07-14 04:50:37.161504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.161530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.161711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.161737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.161922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.161949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.162100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.162125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.162276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.162302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.162471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.162497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.162644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.162670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.162832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.162857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.163075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.163103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.163256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.163281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 
00:34:17.051 [2024-07-14 04:50:37.163440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.163466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.163612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.163637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.163852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.163883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.164032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.164057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.164343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.164369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.164553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.164579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.164734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.164761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.164952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.164978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.165173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.165198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.165346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.165371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 
00:34:17.051 [2024-07-14 04:50:37.165555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.165580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.165761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.165786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.165981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.166007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.166183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.051 [2024-07-14 04:50:37.166208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.051 qpair failed and we were unable to recover it. 00:34:17.051 [2024-07-14 04:50:37.166383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.166409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.166590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.166620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.166766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.166791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.166986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.167012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.167172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.167198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.167381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.167407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 
00:34:17.052 [2024-07-14 04:50:37.167572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.167597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.167754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.167779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.167982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.168008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.168162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.168187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.168345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.168371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.168526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.168552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.168707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.168732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.168912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.168938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.169094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.169120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.169290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.169315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 
00:34:17.052 [2024-07-14 04:50:37.169472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.169497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.169643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.169670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.169832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.169859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.170035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.170061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.170218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.170243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.170412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.170438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.170612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.170637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.170814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.170839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.171025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.171069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.171240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.171269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 
00:34:17.052 [2024-07-14 04:50:37.171483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.171510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.171672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.171699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.171854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.171895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.172054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.172080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.172235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.172260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.172401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.172425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.172584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.172609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.172762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.172787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.172963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.172988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.173141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.173167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 
00:34:17.052 [2024-07-14 04:50:37.173342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.173368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.173558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.173583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.173744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.173771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.173930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.173956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.052 [2024-07-14 04:50:37.174110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.052 [2024-07-14 04:50:37.174136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.052 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.174294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.174324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.174487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.174512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.174695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.174721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.174873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.174907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.175117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.175152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 
00:34:17.053 [2024-07-14 04:50:37.175343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.175370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.175526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.175551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.175713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.175738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.175902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.175928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.176104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.176129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.176311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.176337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.176502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.176529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.176715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.176741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.176899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.176926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.177106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.177132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 
00:34:17.053 [2024-07-14 04:50:37.177287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.177314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.177495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.177525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.177675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.177707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.177917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.177945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.178108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.178134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.178328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.178354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.178509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.178536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.178713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.178738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.178891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.178917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.179090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.179115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 
00:34:17.053 [2024-07-14 04:50:37.179267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.179292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.179454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.179479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.179625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.179650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.179818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.179845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.180024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.180052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.053 [2024-07-14 04:50:37.180234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.053 [2024-07-14 04:50:37.180260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.053 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-14 04:50:37.180435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-14 04:50:37.180461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-14 04:50:37.180614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-14 04:50:37.180640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-14 04:50:37.180795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-14 04:50:37.180821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-14 04:50:37.180998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-14 04:50:37.181024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 
00:34:17.323 [2024-07-14 04:50:37.181215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-14 04:50:37.181240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-14 04:50:37.181413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-14 04:50:37.181438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-14 04:50:37.181618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.181643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.181826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.181851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.182008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.182033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.182301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.182334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.182534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.182558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.182717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.182743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.182901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.182927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.183117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.183142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 
00:34:17.324 [2024-07-14 04:50:37.183406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.183431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.183581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.183606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.183763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.183788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.183968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.183994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.184146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.184172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.184319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.184345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.184523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.184548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.184719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.184745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.184894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.184920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.185104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.185128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 
00:34:17.324 [2024-07-14 04:50:37.185312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.185336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.185531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.185557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.185729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.185754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.185905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.185931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.186190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.186215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.186421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.186446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.186630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.186654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.186802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.186826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.186988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.187014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.187254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.187279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 
00:34:17.324 [2024-07-14 04:50:37.187445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.187472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.187612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.187637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.187809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.187838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.188017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.188044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.188249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.188275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.188422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.188447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.188603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.188627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.188806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.188831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.188990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.189016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.189272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.189297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 
00:34:17.324 [2024-07-14 04:50:37.189466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.189491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.189662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.189687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.189834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.189859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.190097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.190123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.190270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.190295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.190476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.190500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.190659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.190684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.190831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.190857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.191021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.191046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.191203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.191228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 
00:34:17.324 [2024-07-14 04:50:37.191408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.191433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.191607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.191632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.191812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.191838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.192024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.192049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.192214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.192239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.192384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.192409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.192572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.192596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.192741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.192765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.192916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.192943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.193131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.193158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 
00:34:17.324 [2024-07-14 04:50:37.193320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.193345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.193513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.193537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.193719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.193744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.193891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.193917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.194067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.194093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.194252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.194280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.194429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.194455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.324 [2024-07-14 04:50:37.194612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.324 [2024-07-14 04:50:37.194637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.324 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.194844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.194874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.195033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.195058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 
00:34:17.325 [2024-07-14 04:50:37.195238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.195263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.195455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.195481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.195649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.195678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.195829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.195853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.196033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.196058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.196206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.196232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.196409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.196433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.196610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.196644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.196804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.196830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.196984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.197010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 
00:34:17.325 [2024-07-14 04:50:37.197181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.197207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.197368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.197392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.197641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.197665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.197833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.197858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.198015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.198041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.198185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.198209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.198360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.198385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.198595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.198621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.198806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.198831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.198989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.199015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 
00:34:17.325 [2024-07-14 04:50:37.199192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.199217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.199392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.199417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.199578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.199602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.199765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.199790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.199963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.199989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.200145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.200170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.200315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.200339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.200517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.200541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.200690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.200716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.200870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.200895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 
00:34:17.325 [2024-07-14 04:50:37.201075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.201101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.201253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.201279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.201459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.201483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.201686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.201710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.201876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.201901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.202052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.202077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.202231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.202255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.202399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.202425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.202594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.202619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.202774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.202801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 
00:34:17.325 [2024-07-14 04:50:37.202986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.203011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.203160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.203190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.203375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.203405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.203578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.203603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.203808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.203833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.204135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.204161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.204323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.204348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.204499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.204525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.204683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.204708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.204885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.204911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 
00:34:17.325 [2024-07-14 04:50:37.205087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.205112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.205266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.205292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.205445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.205470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.205652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.205677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.205822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.205848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.206094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.206120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.206278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.206304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.206480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.206506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.206658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.206685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-14 04:50:37.206869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-14 04:50:37.206897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 
00:34:17.326 [2024-07-14 04:50:37.207069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.207095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.207245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.207270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.207414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.207440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.207593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.207620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.207767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.207794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.207955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.207981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.208140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.208166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.208315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.208341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.208523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.208551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.208755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.208780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 
00:34:17.326 [2024-07-14 04:50:37.208994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.209020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.209170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.209197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.209372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.209397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.209555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.209580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.209728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.209753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.209929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.209955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.210103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.210129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.210415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.210440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.210600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.210626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.210776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.210802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 
00:34:17.326 [2024-07-14 04:50:37.210996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.211022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.211175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.211200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.211348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.211377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.211518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.211543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.211689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.211715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.211924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.211949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.212114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.212140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.212315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.212340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.212494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.212519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.212709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.212736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 
00:34:17.326 [2024-07-14 04:50:37.212918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.212944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.213098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.213124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.213277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.213305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.213466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.213493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.213676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.213702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.213871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.213897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.214093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.214119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.214291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.214316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.214515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.214540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.214685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.214711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 
00:34:17.326 [2024-07-14 04:50:37.214907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.214932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.215108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.215133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.215296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.215322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.215471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.215497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.215673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.215699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.215848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.215878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.216065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.216091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.216278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.216304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.216480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.216505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.216705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.216731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 
00:34:17.326 [2024-07-14 04:50:37.216887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.216914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.217092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.217118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.217291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.217316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.217475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.217500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.217680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.217706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.217883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.217909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.218068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.218092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.218340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.218365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.218549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.218573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.218727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.218753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 
00:34:17.326 [2024-07-14 04:50:37.218911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.218938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-14 04:50:37.219121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-14 04:50:37.219147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.219298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.219326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.219477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.219504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.219683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.219708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.219872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.219897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.220079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.220103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.220297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.220323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.220480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.220505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.220676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.220702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 
00:34:17.327 [2024-07-14 04:50:37.220886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.220912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.221066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.221093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.221301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.221327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.221478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.221504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.221681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.221706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.221891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.221917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.222103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.222129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.222281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.222308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.222500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.222526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.222675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.222700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 
00:34:17.327 [2024-07-14 04:50:37.222854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.222886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.223065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.223091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.223249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.223273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.223451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.223475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.223629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.223655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.223826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.223851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.224001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.224026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.224169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.224193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.224370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.224395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.224578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.224603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 
00:34:17.327 [2024-07-14 04:50:37.224761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.224786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.224942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.224968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.225169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.225193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.225351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.225375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.225533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.225557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.225740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.225765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.225924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.225949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.226126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.226151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.226325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.226350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.226518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.226544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 
00:34:17.327 [2024-07-14 04:50:37.226693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.226719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.226894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.226919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.227061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.227090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.227257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.227284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.227459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.227483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.227660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.227685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.227826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.227852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.228058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.228084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.228232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.228258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.228439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.228464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 
00:34:17.327 [2024-07-14 04:50:37.228620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.228646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.228795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.228820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.228973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.228999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.229173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.229199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.229338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.229363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.229573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.229598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.229785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.229811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.229992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.230018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.230209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.230235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.230376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.230401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 
00:34:17.327 [2024-07-14 04:50:37.230565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.230590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.230881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-14 04:50:37.230908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-14 04:50:37.231102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.231128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.231276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.231302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.231473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.231497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.231646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.231671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.231882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.231908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.232102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.232127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.232277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.232303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.232484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.232509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 
00:34:17.328 [2024-07-14 04:50:37.232660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.232686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.232863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.232893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.233049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.233075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.233263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.233288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.233449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.233474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.233669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.233694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.233847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.233897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.234085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.234111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.234271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.234295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.234450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.234476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 
00:34:17.328 [2024-07-14 04:50:37.234622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.234647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.234820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.234844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.235026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.235055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.235202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.235227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.235413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.235439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.235588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.235613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.235767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.235791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.235974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.236001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.236209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.236234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.236383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.236409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 
00:34:17.328 [2024-07-14 04:50:37.236567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.236591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.236859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.236889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.237083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.237108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.237258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.237282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.237432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.237456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.237596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.237621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.237775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.237801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.237983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.238008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.238151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.238176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.238358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.238383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 
00:34:17.328 [2024-07-14 04:50:37.238564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.238591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.238764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.238788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.238945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.238971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.239114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.239140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.239294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.239319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.239468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.239492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.239671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.239696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.239847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.239878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.240059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.240084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.240262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.240286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 
00:34:17.328 [2024-07-14 04:50:37.240432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.240457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.240652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.240677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.240826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.240851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.241037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.241061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.241324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.241349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.241511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.241537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.241725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.241749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.241898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.241923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.242088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.242113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.242266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.242291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 
00:34:17.328 [2024-07-14 04:50:37.242448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.242472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.242651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.242675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.242824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.242855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.243043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.243069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.243237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.243261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.243422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.243449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.243626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.243652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.243825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.243849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.244024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.244050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.244201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.244226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 
00:34:17.328 [2024-07-14 04:50:37.244399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.244424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.244590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-14 04:50:37.244616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-14 04:50:37.244796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.244820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.244982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.245007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.245187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.245212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.245362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.245386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.245564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.245588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.245772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.245797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.245957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.245984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.246202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.246230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 
00:34:17.329 [2024-07-14 04:50:37.246417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.246444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.246626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.246652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.246802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.246826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.246988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.247014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.247178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.247204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.247386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.247413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.247599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.247626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.247803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.247827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.247994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.248020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 
00:34:17.329 [2024-07-14 04:50:37.248066] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14090f0 (9): Bad file descriptor 00:34:17.329 [2024-07-14 04:50:37.248303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.248352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.248567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.248595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.248743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.248770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.248943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.248970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.249149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.249175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.249359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.249385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.249535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.249561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.249743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.249770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.249926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.249954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 
00:34:17.329 [2024-07-14 04:50:37.250113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.250139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.250326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.250352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.250535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.250561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.250770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.250796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.250972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.250999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.251149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.251175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.251340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.251366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.251543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.251568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.251755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.251782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.251962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.251988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 
00:34:17.329 [2024-07-14 04:50:37.252138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.252164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.252352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.252378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.252528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.252555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.252708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.252734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.252924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.252950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.253116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.253147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.253299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.253325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.253474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.253504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.253660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.253686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.253843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.253885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 
00:34:17.329 [2024-07-14 04:50:37.254064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.254089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.254278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.254304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.254483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.254509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.254665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.254690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.254878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-14 04:50:37.254906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-14 04:50:37.255058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.255085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.255252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.255278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.255420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.255446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.255595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.255620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.255783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.255808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 
00:34:17.330 [2024-07-14 04:50:37.255977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.256017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.256230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.256258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.256430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.256456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.256639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.256665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.256849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.256880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.257030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.257056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.257239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.257265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.257462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.257488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.257657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.257682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.257839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.257873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 
00:34:17.330 [2024-07-14 04:50:37.258063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.258088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.258243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.258269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.258415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.258441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.258623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.258650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.258839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.258869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.259017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.259043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.259193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.259218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.259373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.259398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.259545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.259572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.259753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.259779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 
00:34:17.330 [2024-07-14 04:50:37.265622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.265647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.265827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.265853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.266014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.266039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.266188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.266215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.266418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.266444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.266622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.266648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.266802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.266829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.266994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.267033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.267193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.267221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.267417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.267443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 
00:34:17.330 [2024-07-14 04:50:37.267611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.267637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.267847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.267878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.268058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.268084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.268285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.268311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.268459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.268485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.268656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-14 04:50:37.268682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-14 04:50:37.268861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.268892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.269071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.269102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.269277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.269303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.269452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.269478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 
00:34:17.331 [2024-07-14 04:50:37.269632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.269658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.269806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.269832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.270019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.270046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.270241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.270267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.270412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.270438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.270610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.270636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.270834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.270860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.271069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.271096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.271253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.271280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.271437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.271463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 
00:34:17.331 [2024-07-14 04:50:37.271637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.271663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.271850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.271888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.272039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.272065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.272225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.272251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.272437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.272461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.272616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.272640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.272788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.272814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.273026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.273052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.273202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.273227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.273412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.273437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 
00:34:17.331 [2024-07-14 04:50:37.273585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.273610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.273811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.273836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.274000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.274026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.274174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.274199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.274356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.274381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.274578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.274603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.274748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.274773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:17.331 [2024-07-14 04:50:37.274948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.274973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:17.331 [2024-07-14 04:50:37.275143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.275167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 
00:34:17.331 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:17.331 [2024-07-14 04:50:37.275334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.275359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.275526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.275551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.275724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.275749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.275928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.275953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.276096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.276122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.276273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.276298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.276489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.276519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.276663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.276697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 00:34:17.331 [2024-07-14 04:50:37.276847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.331 [2024-07-14 04:50:37.276880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.331 qpair failed and we were unable to recover it. 
00:34:17.332 [2024-07-14 04:50:37.282870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.282910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.283100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.283127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.283275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.283302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.283484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.283509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.283666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.283693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.283837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.283863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.284030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.284056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.284206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.284244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.284446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.284472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.284652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.284678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 
00:34:17.332 [2024-07-14 04:50:37.284860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.284892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.285046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.285073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.285232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.285262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.285467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.285494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.285650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.285676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.285834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.285862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.286037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.286063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.286264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.286290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.286462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.286487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.286681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.286707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 
00:34:17.332 [2024-07-14 04:50:37.292434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:17.332 [2024-07-14 04:50:37.292460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:17.332 [2024-07-14 04:50:37.292653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.292679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.332 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.332 [2024-07-14 04:50:37.292845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.292887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.293051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.293076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.293248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.293273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.293460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.293485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.293649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.293674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.293856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.293888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 
00:34:17.332 [2024-07-14 04:50:37.294041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.294065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.294217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.294242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.294396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.332 [2024-07-14 04:50:37.294422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.332 qpair failed and we were unable to recover it. 00:34:17.332 [2024-07-14 04:50:37.294576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.294601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.294748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.294773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.294920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.294946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.295104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.295129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.295312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.295336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.295512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.295537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.295702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.295727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 
00:34:17.333 [2024-07-14 04:50:37.295908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.295934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.296089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.296113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.296265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.296289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.296433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.296458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.296636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.296662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.296817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.296846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.297005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.297031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.297338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.297362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.297546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.297571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.297721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.297746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 
00:34:17.333 [2024-07-14 04:50:37.297899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.297924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.298099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.298124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.298333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.298358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.298508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.298534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.298680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.298705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.298854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.298886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.299040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.299066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.299233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.299259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.299420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.299444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.299597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.299623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 
00:34:17.333 [2024-07-14 04:50:37.299809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.299835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.300027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.300054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.300225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.300250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.300420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.300444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.300619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.300645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.300821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.300846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.301024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.301064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.301259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.301287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.301431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.301458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.301604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.301630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 
00:34:17.333 [2024-07-14 04:50:37.301808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.301834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.302111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.302138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.302425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.302451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.302604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.302630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.302814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.302840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.303027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.303053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.303214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.303240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.303384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.303411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.303593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.303619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.303794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.303820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 
00:34:17.333 [2024-07-14 04:50:37.303988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.304015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4498000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.304218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.304257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.304418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.304444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.304602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.304627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.304781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.304806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.304998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.305029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.305174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.305204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.305384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.305409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.305589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.305615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.305769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.305794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 
00:34:17.333 [2024-07-14 04:50:37.305954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.305980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.306134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.306158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.306339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.306364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.306551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.306577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.306731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.306756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.306955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.306980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.307235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.307261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.307404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.307429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.307604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.307630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.307910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.307937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 
00:34:17.333 [2024-07-14 04:50:37.308123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.308149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.308313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.308339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.308515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.308541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.308708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-14 04:50:37.308733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-14 04:50:37.308949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.308975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.309141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.309166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.309316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.309342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.309499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.309525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.309676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.309701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.309878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.309905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 
00:34:17.334 [2024-07-14 04:50:37.310073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.310099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.310282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.310307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.310481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.310506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.310685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.310710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.310859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.310890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.311044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.311069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.311247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.311271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.311425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.311450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.311657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.311683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.311873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.311899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 
00:34:17.334 [2024-07-14 04:50:37.312050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.312075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.312244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.312270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.312435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.312460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.312651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.312678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.312828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.312861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.313070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.313101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.313265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.313290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.313440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.313466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.313645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.313670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.313826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.313852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 
00:34:17.334 [2024-07-14 04:50:37.314104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.314130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.314341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.314366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.314524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.314549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.314702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.314726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.314884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.314910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.315066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.315091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.315270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.315296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.315468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.315492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.315678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.315703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.315895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.315920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 
00:34:17.334 [2024-07-14 04:50:37.316067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.316094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.316291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.316317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 Malloc0 00:34:17.334 [2024-07-14 04:50:37.316493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.316518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.316683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.316708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.334 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:17.334 [2024-07-14 04:50:37.316910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.316937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.334 [2024-07-14 04:50:37.317099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.317125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.334 [2024-07-14 04:50:37.317276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.317301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.317464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.317490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 
00:34:17.334 [2024-07-14 04:50:37.317648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.317675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.317825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.317850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.318019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.318048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.318206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.318232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.318378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.318404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.318609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.318634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.318786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.318811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.318998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.319024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.319178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.319204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.319356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.319380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 
00:34:17.334 [2024-07-14 04:50:37.319559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.319584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.319755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.319780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.319943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.319970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.320116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.320142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.320143] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:17.334 [2024-07-14 04:50:37.320303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.320328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.320506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.320536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.320691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.320716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.320863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.320893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-14 04:50:37.321104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-14 04:50:37.321130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 
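00:34:17.334 The tcp.c notice above ("*** TCP Transport Init ***") is the target acknowledging the nvmf_create_transport -t tcp -o call from the trace a few lines earlier, so the target-side TCP transport is now up even though the host's connect() attempts are still being refused (no listener has been added yet). A minimal sketch of the same step done directly, assuming scripts/rpc.py and keeping the flags exactly as they appear in the trace:
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_get_transports      # verify the tcp transport is registered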
00:34:17.334 [2024-07-14 04:50:37.321298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.321324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.321509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.321534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.321706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.321732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.321892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.321918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.322099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.322125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.322313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.322340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.322521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.322547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.322707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.322732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.322907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.322932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.323106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.323132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 
00:34:17.335 [2024-07-14 04:50:37.323327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.323352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.323504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.323529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.323690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.323715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.323923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.323949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.324106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.324132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.324284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.324310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.324491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.324516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.324667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.324694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.324872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.324898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.325061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.325087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 
00:34:17.335 [2024-07-14 04:50:37.325247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.325271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.325418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.325442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.325618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.325643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.325822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.325848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.326043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.326069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.326227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.326254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.326430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.326456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.326611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.326637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.326845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.326877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.327062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.327088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 
00:34:17.335 [2024-07-14 04:50:37.327250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.335 [2024-07-14 04:50:37.327275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.335 qpair failed and we were unable to recover it.
00:34:17.335 [2024-07-14 04:50:37.327448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.335 [2024-07-14 04:50:37.327473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.335 qpair failed and we were unable to recover it.
00:34:17.335 [2024-07-14 04:50:37.327622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.335 [2024-07-14 04:50:37.327648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.335 qpair failed and we were unable to recover it.
00:34:17.335 [2024-07-14 04:50:37.327833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.335 [2024-07-14 04:50:37.327859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.335 qpair failed and we were unable to recover it.
00:34:17.335 [2024-07-14 04:50:37.328051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.335 [2024-07-14 04:50:37.328078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.335 qpair failed and we were unable to recover it.
00:34:17.335 [2024-07-14 04:50:37.328271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.335 [2024-07-14 04:50:37.328297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.335 qpair failed and we were unable to recover it.
00:34:17.335 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:17.335 [2024-07-14 04:50:37.328494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.335 [2024-07-14 04:50:37.328520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.335 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:17.335 qpair failed and we were unable to recover it.
00:34:17.335 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:17.335 [2024-07-14 04:50:37.328700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.335 [2024-07-14 04:50:37.328726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.335 qpair failed and we were unable to recover it.
00:34:17.335 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.335 [2024-07-14 04:50:37.328913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.328940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.329090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.329117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.329280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.329307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.329472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.329497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.329648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.329673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.329833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.329860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.330053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.330079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.330247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.330272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.330421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.330447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 
00:34:17.335 [2024-07-14 04:50:37.330626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.330651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.330795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.330821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.330989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.331015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.331193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.331219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.331391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.331416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.331566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.331592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.331738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.331763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.331950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.331977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.332140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.332165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.332315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.332340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 
00:34:17.335 [2024-07-14 04:50:37.332496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.332521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.332704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.332730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.332885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.332911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.333066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.333093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.333304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.333334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.333530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.333555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.333734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.333759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.333946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.333972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.334166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.334192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.334349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.334376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 
00:34:17.335 [2024-07-14 04:50:37.334556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.334582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.334746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-14 04:50:37.334772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-14 04:50:37.334960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.334986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.335146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.335171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.335352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.335378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.335537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.335563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.335711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.335736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.335881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.335908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.336083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.336109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.336292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.336317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 
00:34:17.336 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:17.336 [2024-07-14 04:50:37.336474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.336 [2024-07-14 04:50:37.336500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.336 qpair failed and we were unable to recover it.
00:34:17.336 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:17.336 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:17.336 [2024-07-14 04:50:37.336672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.336 [2024-07-14 04:50:37.336698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.336 qpair failed and we were unable to recover it.
00:34:17.336 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:17.336 [2024-07-14 04:50:37.336881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.336 [2024-07-14 04:50:37.336908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.336 qpair failed and we were unable to recover it.
00:34:17.336 [2024-07-14 04:50:37.337059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.336 [2024-07-14 04:50:37.337085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.336 qpair failed and we were unable to recover it.
00:34:17.336 [2024-07-14 04:50:37.337264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.336 [2024-07-14 04:50:37.337290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.336 qpair failed and we were unable to recover it.
00:34:17.336 [2024-07-14 04:50:37.337446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.336 [2024-07-14 04:50:37.337472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.336 qpair failed and we were unable to recover it.
00:34:17.336 [2024-07-14 04:50:37.337647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.336 [2024-07-14 04:50:37.337672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.336 qpair failed and we were unable to recover it.
00:34:17.336 [2024-07-14 04:50:37.337826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.336 [2024-07-14 04:50:37.337851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.336 qpair failed and we were unable to recover it.
00:34:17.336 [2024-07-14 04:50:37.338010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.338036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.338183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.338212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.338395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.338422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.338598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.338624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.338829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.338854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.339005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.339030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.339175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.339201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.339375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.339399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.339578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.339603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.339773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.339798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 
00:34:17.336 [2024-07-14 04:50:37.339953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.339979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.340164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.340189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.340357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.340382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.340540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.340565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.340717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.340743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.340952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.340979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.341129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.341154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.341313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.341339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.341492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.341518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.341692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.341718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 
00:34:17.336 [2024-07-14 04:50:37.341919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.341945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.342096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.342121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.342298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.342324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.342480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.342506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.342656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.342682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.342878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.342904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.343069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.343095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.343285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.343311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.343514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.343539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.343713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.343738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 
00:34:17.336 [2024-07-14 04:50:37.343899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.336 [2024-07-14 04:50:37.343925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.336 qpair failed and we were unable to recover it.
00:34:17.336 [2024-07-14 04:50:37.344090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.336 [2024-07-14 04:50:37.344116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.336 qpair failed and we were unable to recover it.
00:34:17.336 [2024-07-14 04:50:37.344270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.336 [2024-07-14 04:50:37.344295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.336 qpair failed and we were unable to recover it.
00:34:17.336 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:17.336 [2024-07-14 04:50:37.344465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.336 [2024-07-14 04:50:37.344490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.336 qpair failed and we were unable to recover it.
00:34:17.336 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:17.336 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:17.336 [2024-07-14 04:50:37.344687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.336 [2024-07-14 04:50:37.344714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.336 qpair failed and we were unable to recover it.
00:34:17.336 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:17.336 [2024-07-14 04:50:37.344860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.336 [2024-07-14 04:50:37.344893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.336 qpair failed and we were unable to recover it.
00:34:17.336 [2024-07-14 04:50:37.345074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.336 [2024-07-14 04:50:37.345099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.336 qpair failed and we were unable to recover it.
00:34:17.336 [2024-07-14 04:50:37.345274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.336 [2024-07-14 04:50:37.345298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.336 qpair failed and we were unable to recover it.
00:34:17.336 [2024-07-14 04:50:37.345472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.345498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.345679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.345706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.345890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.345917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.346085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.346110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.346257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.346281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.346466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.346493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.346648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-14 04:50:37.346672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-14 04:50:37.346820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-14 04:50:37.346846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-14 04:50:37.347008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-14 04:50:37.347034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-14 04:50:37.347184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-14 04:50:37.347209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 
00:34:17.337 [2024-07-14 04:50:37.347361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.337 [2024-07-14 04:50:37.347385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.337 qpair failed and we were unable to recover it.
00:34:17.337 [2024-07-14 04:50:37.347537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.337 [2024-07-14 04:50:37.347564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.337 qpair failed and we were unable to recover it.
00:34:17.337 [2024-07-14 04:50:37.347746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.337 [2024-07-14 04:50:37.347771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.337 qpair failed and we were unable to recover it.
00:34:17.337 [2024-07-14 04:50:37.348005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.337 [2024-07-14 04:50:37.348031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.337 qpair failed and we were unable to recover it.
00:34:17.337 [2024-07-14 04:50:37.348214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.337 [2024-07-14 04:50:37.348239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f44a8000b90 with addr=10.0.0.2, port=4420
00:34:17.337 qpair failed and we were unable to recover it.
00:34:17.337 [2024-07-14 04:50:37.348359] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:17.337 [2024-07-14 04:50:37.350885] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.337 [2024-07-14 04:50:37.351070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.337 [2024-07-14 04:50:37.351098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.337 [2024-07-14 04:50:37.351114] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.337 [2024-07-14 04:50:37.351127] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90
00:34:17.337 [2024-07-14 04:50:37.351164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.337 qpair failed and we were unable to recover it.
00:34:17.337 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:17.337 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:17.337 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:17.337 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:17.337 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:17.337 04:50:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2948980
00:34:17.337 [2024-07-14 04:50:37.360759] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.337 [2024-07-14 04:50:37.360918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.337 [2024-07-14 04:50:37.360946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.337 [2024-07-14 04:50:37.360960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.337 [2024-07-14 04:50:37.360973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90
00:34:17.337 [2024-07-14 04:50:37.361002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.337 qpair failed and we were unable to recover it.
00:34:17.337 [2024-07-14 04:50:37.370800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.337 [2024-07-14 04:50:37.370957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.337 [2024-07-14 04:50:37.370983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.337 [2024-07-14 04:50:37.370998] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.337 [2024-07-14 04:50:37.371011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90
00:34:17.337 [2024-07-14 04:50:37.371041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.337 qpair failed and we were unable to recover it.
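Stripped of the interleaved connect() retries, the target-side setup traced above amounts to the following RPC sequence. This is a minimal sketch only, assuming the test's rpc_cmd wrapper drives SPDK's scripts/rpc.py against the running target; the RPC socket path and any transport creation done elsewhere in the script are omitted.

# Sketch: equivalent scripts/rpc.py calls for the setup shown in the trace above (assumed invocation path).
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, set serial number
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # attach the Malloc0 bdev as a namespace
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420            # discovery subsystem on the same port

Once the data-port listener is added, the target logs that it is listening on 10.0.0.2 port 4420, and the host-side failures switch from connect() errno 111 to fabrics CONNECT errors (sct 1, sc 130, Unknown controller ID 0x1), as the entries that follow show.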
00:34:17.337 [2024-07-14 04:50:37.380795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.337 [2024-07-14 04:50:37.380951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.337 [2024-07-14 04:50:37.380977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.337 [2024-07-14 04:50:37.380999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.337 [2024-07-14 04:50:37.381013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.337 [2024-07-14 04:50:37.381042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-14 04:50:37.390809] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.337 [2024-07-14 04:50:37.390998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.337 [2024-07-14 04:50:37.391028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.337 [2024-07-14 04:50:37.391046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.337 [2024-07-14 04:50:37.391059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.337 [2024-07-14 04:50:37.391091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-14 04:50:37.400815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.337 [2024-07-14 04:50:37.401006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.337 [2024-07-14 04:50:37.401033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.337 [2024-07-14 04:50:37.401048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.337 [2024-07-14 04:50:37.401061] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.337 [2024-07-14 04:50:37.401092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.337 qpair failed and we were unable to recover it. 
00:34:17.337 [2024-07-14 04:50:37.410847] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.337 [2024-07-14 04:50:37.411005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.337 [2024-07-14 04:50:37.411032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.337 [2024-07-14 04:50:37.411046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.337 [2024-07-14 04:50:37.411060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.337 [2024-07-14 04:50:37.411090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-14 04:50:37.420892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.337 [2024-07-14 04:50:37.421057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.337 [2024-07-14 04:50:37.421083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.337 [2024-07-14 04:50:37.421098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.337 [2024-07-14 04:50:37.421111] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.337 [2024-07-14 04:50:37.421140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-14 04:50:37.430873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.337 [2024-07-14 04:50:37.431037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.337 [2024-07-14 04:50:37.431065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.337 [2024-07-14 04:50:37.431084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.337 [2024-07-14 04:50:37.431099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.337 [2024-07-14 04:50:37.431130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.337 qpair failed and we were unable to recover it. 
00:34:17.337 [2024-07-14 04:50:37.440886] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.337 [2024-07-14 04:50:37.441049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.337 [2024-07-14 04:50:37.441074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.337 [2024-07-14 04:50:37.441089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.337 [2024-07-14 04:50:37.441102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.337 [2024-07-14 04:50:37.441132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-14 04:50:37.450945] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.337 [2024-07-14 04:50:37.451095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.337 [2024-07-14 04:50:37.451121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.337 [2024-07-14 04:50:37.451135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.337 [2024-07-14 04:50:37.451148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.337 [2024-07-14 04:50:37.451179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-14 04:50:37.460952] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.337 [2024-07-14 04:50:37.461109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.337 [2024-07-14 04:50:37.461134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.337 [2024-07-14 04:50:37.461148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.337 [2024-07-14 04:50:37.461161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.337 [2024-07-14 04:50:37.461190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.337 qpair failed and we were unable to recover it. 
00:34:17.337 [2024-07-14 04:50:37.471009] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.337 [2024-07-14 04:50:37.471170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.337 [2024-07-14 04:50:37.471200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.337 [2024-07-14 04:50:37.471215] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.337 [2024-07-14 04:50:37.471228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.337 [2024-07-14 04:50:37.471257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-14 04:50:37.481092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.337 [2024-07-14 04:50:37.481242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.337 [2024-07-14 04:50:37.481267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.337 [2024-07-14 04:50:37.481282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.337 [2024-07-14 04:50:37.481295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.337 [2024-07-14 04:50:37.481325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-14 04:50:37.491104] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.337 [2024-07-14 04:50:37.491254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.337 [2024-07-14 04:50:37.491280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.337 [2024-07-14 04:50:37.491297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.337 [2024-07-14 04:50:37.491310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.337 [2024-07-14 04:50:37.491340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.337 qpair failed and we were unable to recover it. 
00:34:17.337 [2024-07-14 04:50:37.501114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.337 [2024-07-14 04:50:37.501276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.337 [2024-07-14 04:50:37.501302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.337 [2024-07-14 04:50:37.501317] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.337 [2024-07-14 04:50:37.501329] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.337 [2024-07-14 04:50:37.501358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.597 [2024-07-14 04:50:37.511153] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.597 [2024-07-14 04:50:37.511337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.597 [2024-07-14 04:50:37.511363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.597 [2024-07-14 04:50:37.511378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.597 [2024-07-14 04:50:37.511391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.597 [2024-07-14 04:50:37.511427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.597 qpair failed and we were unable to recover it. 00:34:17.597 [2024-07-14 04:50:37.521149] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.597 [2024-07-14 04:50:37.521301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.597 [2024-07-14 04:50:37.521327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.597 [2024-07-14 04:50:37.521342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.597 [2024-07-14 04:50:37.521356] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.597 [2024-07-14 04:50:37.521386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.597 qpair failed and we were unable to recover it. 
00:34:17.597 [2024-07-14 04:50:37.531187] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.597 [2024-07-14 04:50:37.531350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.597 [2024-07-14 04:50:37.531384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.597 [2024-07-14 04:50:37.531398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.597 [2024-07-14 04:50:37.531411] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.597 [2024-07-14 04:50:37.531440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.597 qpair failed and we were unable to recover it. 00:34:17.597 [2024-07-14 04:50:37.541226] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.597 [2024-07-14 04:50:37.541426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.597 [2024-07-14 04:50:37.541452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.597 [2024-07-14 04:50:37.541466] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.597 [2024-07-14 04:50:37.541479] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.597 [2024-07-14 04:50:37.541522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.597 qpair failed and we were unable to recover it. 00:34:17.597 [2024-07-14 04:50:37.551324] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.597 [2024-07-14 04:50:37.551476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.597 [2024-07-14 04:50:37.551503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.597 [2024-07-14 04:50:37.551517] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.597 [2024-07-14 04:50:37.551530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.597 [2024-07-14 04:50:37.551562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.597 qpair failed and we were unable to recover it. 
00:34:17.597 [2024-07-14 04:50:37.561331] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.597 [2024-07-14 04:50:37.561483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.597 [2024-07-14 04:50:37.561514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.597 [2024-07-14 04:50:37.561530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.597 [2024-07-14 04:50:37.561543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.597 [2024-07-14 04:50:37.561572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.597 qpair failed and we were unable to recover it. 00:34:17.597 [2024-07-14 04:50:37.571337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.597 [2024-07-14 04:50:37.571540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.597 [2024-07-14 04:50:37.571567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.597 [2024-07-14 04:50:37.571585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.597 [2024-07-14 04:50:37.571598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.597 [2024-07-14 04:50:37.571628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.597 qpair failed and we were unable to recover it. 00:34:17.597 [2024-07-14 04:50:37.581349] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.597 [2024-07-14 04:50:37.581507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.597 [2024-07-14 04:50:37.581533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.597 [2024-07-14 04:50:37.581547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.597 [2024-07-14 04:50:37.581560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.597 [2024-07-14 04:50:37.581591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.597 qpair failed and we were unable to recover it. 
00:34:17.597 [2024-07-14 04:50:37.591389] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.597 [2024-07-14 04:50:37.591545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.597 [2024-07-14 04:50:37.591571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.597 [2024-07-14 04:50:37.591585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.597 [2024-07-14 04:50:37.591598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.597 [2024-07-14 04:50:37.591628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.597 qpair failed and we were unable to recover it. 00:34:17.597 [2024-07-14 04:50:37.601393] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.597 [2024-07-14 04:50:37.601569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.597 [2024-07-14 04:50:37.601595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.597 [2024-07-14 04:50:37.601609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.597 [2024-07-14 04:50:37.601630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.597 [2024-07-14 04:50:37.601660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.597 qpair failed and we were unable to recover it. 00:34:17.597 [2024-07-14 04:50:37.611415] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.597 [2024-07-14 04:50:37.611564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.597 [2024-07-14 04:50:37.611590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.597 [2024-07-14 04:50:37.611604] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.597 [2024-07-14 04:50:37.611617] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.598 [2024-07-14 04:50:37.611648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.598 qpair failed and we were unable to recover it. 
00:34:17.598 [2024-07-14 04:50:37.621439] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.598 [2024-07-14 04:50:37.621595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.598 [2024-07-14 04:50:37.621620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.598 [2024-07-14 04:50:37.621634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.598 [2024-07-14 04:50:37.621647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.598 [2024-07-14 04:50:37.621676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.598 qpair failed and we were unable to recover it. 00:34:17.598 [2024-07-14 04:50:37.631505] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.598 [2024-07-14 04:50:37.631654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.598 [2024-07-14 04:50:37.631680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.598 [2024-07-14 04:50:37.631694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.598 [2024-07-14 04:50:37.631707] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.598 [2024-07-14 04:50:37.631748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.598 qpair failed and we were unable to recover it. 00:34:17.598 [2024-07-14 04:50:37.641494] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.598 [2024-07-14 04:50:37.641648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.598 [2024-07-14 04:50:37.641674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.598 [2024-07-14 04:50:37.641688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.598 [2024-07-14 04:50:37.641700] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.598 [2024-07-14 04:50:37.641730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.598 qpair failed and we were unable to recover it. 
00:34:17.598 [2024-07-14 04:50:37.651561] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.598 [2024-07-14 04:50:37.651718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.598 [2024-07-14 04:50:37.651745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.598 [2024-07-14 04:50:37.651759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.598 [2024-07-14 04:50:37.651772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.598 [2024-07-14 04:50:37.651804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.598 qpair failed and we were unable to recover it. 00:34:17.598 [2024-07-14 04:50:37.661574] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.598 [2024-07-14 04:50:37.661737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.598 [2024-07-14 04:50:37.661764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.598 [2024-07-14 04:50:37.661778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.598 [2024-07-14 04:50:37.661791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.598 [2024-07-14 04:50:37.661832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.598 qpair failed and we were unable to recover it. 00:34:17.598 [2024-07-14 04:50:37.671544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.598 [2024-07-14 04:50:37.671698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.598 [2024-07-14 04:50:37.671725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.598 [2024-07-14 04:50:37.671739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.598 [2024-07-14 04:50:37.671752] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.598 [2024-07-14 04:50:37.671782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.598 qpair failed and we were unable to recover it. 
00:34:17.598 [2024-07-14 04:50:37.681610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.598 [2024-07-14 04:50:37.681765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.598 [2024-07-14 04:50:37.681791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.598 [2024-07-14 04:50:37.681805] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.598 [2024-07-14 04:50:37.681817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.598 [2024-07-14 04:50:37.681847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.598 qpair failed and we were unable to recover it. 00:34:17.598 [2024-07-14 04:50:37.691638] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.598 [2024-07-14 04:50:37.691793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.598 [2024-07-14 04:50:37.691819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.598 [2024-07-14 04:50:37.691834] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.598 [2024-07-14 04:50:37.691852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.598 [2024-07-14 04:50:37.691890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.598 qpair failed and we were unable to recover it. 00:34:17.598 [2024-07-14 04:50:37.701671] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.598 [2024-07-14 04:50:37.701830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.598 [2024-07-14 04:50:37.701855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.598 [2024-07-14 04:50:37.701881] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.598 [2024-07-14 04:50:37.701896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.598 [2024-07-14 04:50:37.701927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.598 qpair failed and we were unable to recover it. 
00:34:17.598 [2024-07-14 04:50:37.711676] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.598 [2024-07-14 04:50:37.711841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.598 [2024-07-14 04:50:37.711875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.598 [2024-07-14 04:50:37.711891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.598 [2024-07-14 04:50:37.711904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.598 [2024-07-14 04:50:37.711934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.598 qpair failed and we were unable to recover it. 00:34:17.598 [2024-07-14 04:50:37.721685] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.598 [2024-07-14 04:50:37.721830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.598 [2024-07-14 04:50:37.721856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.598 [2024-07-14 04:50:37.721879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.598 [2024-07-14 04:50:37.721893] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.599 [2024-07-14 04:50:37.721923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.599 qpair failed and we were unable to recover it. 00:34:17.599 [2024-07-14 04:50:37.731718] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.599 [2024-07-14 04:50:37.731929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.599 [2024-07-14 04:50:37.731956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.599 [2024-07-14 04:50:37.731970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.599 [2024-07-14 04:50:37.731983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.599 [2024-07-14 04:50:37.732012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.599 qpair failed and we were unable to recover it. 
00:34:17.599 [2024-07-14 04:50:37.741757] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.599 [2024-07-14 04:50:37.741917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.599 [2024-07-14 04:50:37.741944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.599 [2024-07-14 04:50:37.741958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.599 [2024-07-14 04:50:37.741971] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.599 [2024-07-14 04:50:37.742000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.599 qpair failed and we were unable to recover it. 00:34:17.599 [2024-07-14 04:50:37.751813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.599 [2024-07-14 04:50:37.751984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.599 [2024-07-14 04:50:37.752011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.599 [2024-07-14 04:50:37.752029] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.599 [2024-07-14 04:50:37.752042] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.599 [2024-07-14 04:50:37.752072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.599 qpair failed and we were unable to recover it. 00:34:17.599 [2024-07-14 04:50:37.761793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.599 [2024-07-14 04:50:37.761960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.599 [2024-07-14 04:50:37.761987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.599 [2024-07-14 04:50:37.762001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.599 [2024-07-14 04:50:37.762014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.599 [2024-07-14 04:50:37.762044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.599 qpair failed and we were unable to recover it. 
00:34:17.599 [2024-07-14 04:50:37.771836] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.599 [2024-07-14 04:50:37.772029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.599 [2024-07-14 04:50:37.772057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.599 [2024-07-14 04:50:37.772072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.599 [2024-07-14 04:50:37.772090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.599 [2024-07-14 04:50:37.772121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.599 qpair failed and we were unable to recover it. 00:34:17.599 [2024-07-14 04:50:37.781895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.599 [2024-07-14 04:50:37.782053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.599 [2024-07-14 04:50:37.782079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.599 [2024-07-14 04:50:37.782100] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.599 [2024-07-14 04:50:37.782114] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.599 [2024-07-14 04:50:37.782145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.599 qpair failed and we were unable to recover it. 00:34:17.859 [2024-07-14 04:50:37.791893] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.859 [2024-07-14 04:50:37.792095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.859 [2024-07-14 04:50:37.792121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.859 [2024-07-14 04:50:37.792136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.859 [2024-07-14 04:50:37.792149] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.859 [2024-07-14 04:50:37.792180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.859 qpair failed and we were unable to recover it. 
00:34:17.859 [2024-07-14 04:50:37.801947] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.859 [2024-07-14 04:50:37.802099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.859 [2024-07-14 04:50:37.802125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.859 [2024-07-14 04:50:37.802139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.859 [2024-07-14 04:50:37.802152] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.860 [2024-07-14 04:50:37.802182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.860 qpair failed and we were unable to recover it. 00:34:17.860 [2024-07-14 04:50:37.811962] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.860 [2024-07-14 04:50:37.812115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.860 [2024-07-14 04:50:37.812141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.860 [2024-07-14 04:50:37.812155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.860 [2024-07-14 04:50:37.812168] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.860 [2024-07-14 04:50:37.812197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.860 qpair failed and we were unable to recover it. 00:34:17.860 [2024-07-14 04:50:37.822057] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.860 [2024-07-14 04:50:37.822259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.860 [2024-07-14 04:50:37.822285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.860 [2024-07-14 04:50:37.822300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.860 [2024-07-14 04:50:37.822312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.860 [2024-07-14 04:50:37.822341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.860 qpair failed and we were unable to recover it. 
00:34:17.860 [2024-07-14 04:50:37.832013] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.860 [2024-07-14 04:50:37.832174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.860 [2024-07-14 04:50:37.832200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.860 [2024-07-14 04:50:37.832214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.860 [2024-07-14 04:50:37.832226] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.860 [2024-07-14 04:50:37.832256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.860 qpair failed and we were unable to recover it. 00:34:17.860 [2024-07-14 04:50:37.842043] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.860 [2024-07-14 04:50:37.842195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.860 [2024-07-14 04:50:37.842220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.860 [2024-07-14 04:50:37.842234] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.860 [2024-07-14 04:50:37.842246] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.860 [2024-07-14 04:50:37.842276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.860 qpair failed and we were unable to recover it. 00:34:17.860 [2024-07-14 04:50:37.852065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.860 [2024-07-14 04:50:37.852216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.860 [2024-07-14 04:50:37.852241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.860 [2024-07-14 04:50:37.852256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.860 [2024-07-14 04:50:37.852268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.860 [2024-07-14 04:50:37.852297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.860 qpair failed and we were unable to recover it. 
00:34:17.860 [2024-07-14 04:50:37.862123] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.860 [2024-07-14 04:50:37.862282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.860 [2024-07-14 04:50:37.862308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.860 [2024-07-14 04:50:37.862328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.860 [2024-07-14 04:50:37.862341] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.860 [2024-07-14 04:50:37.862371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.860 qpair failed and we were unable to recover it. 00:34:17.860 [2024-07-14 04:50:37.872164] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.860 [2024-07-14 04:50:37.872351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.860 [2024-07-14 04:50:37.872382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.860 [2024-07-14 04:50:37.872397] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.860 [2024-07-14 04:50:37.872410] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.860 [2024-07-14 04:50:37.872439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.860 qpair failed and we were unable to recover it. 00:34:17.860 [2024-07-14 04:50:37.882180] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.860 [2024-07-14 04:50:37.882336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.860 [2024-07-14 04:50:37.882361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.860 [2024-07-14 04:50:37.882375] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.860 [2024-07-14 04:50:37.882388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.860 [2024-07-14 04:50:37.882420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.860 qpair failed and we were unable to recover it. 
00:34:17.860 [2024-07-14 04:50:37.892218] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.860 [2024-07-14 04:50:37.892388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.860 [2024-07-14 04:50:37.892414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.860 [2024-07-14 04:50:37.892428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.860 [2024-07-14 04:50:37.892441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.860 [2024-07-14 04:50:37.892470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.860 qpair failed and we were unable to recover it. 00:34:17.860 [2024-07-14 04:50:37.902256] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.860 [2024-07-14 04:50:37.902459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.860 [2024-07-14 04:50:37.902485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.860 [2024-07-14 04:50:37.902500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.860 [2024-07-14 04:50:37.902512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.860 [2024-07-14 04:50:37.902542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.860 qpair failed and we were unable to recover it. 00:34:17.860 [2024-07-14 04:50:37.912259] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.860 [2024-07-14 04:50:37.912413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.860 [2024-07-14 04:50:37.912439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.860 [2024-07-14 04:50:37.912453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.860 [2024-07-14 04:50:37.912467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.860 [2024-07-14 04:50:37.912502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.860 qpair failed and we were unable to recover it. 
00:34:17.860 [2024-07-14 04:50:37.922251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.860 [2024-07-14 04:50:37.922403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.860 [2024-07-14 04:50:37.922429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.860 [2024-07-14 04:50:37.922443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.860 [2024-07-14 04:50:37.922455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.860 [2024-07-14 04:50:37.922484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.860 qpair failed and we were unable to recover it. 00:34:17.860 [2024-07-14 04:50:37.932320] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.860 [2024-07-14 04:50:37.932483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.860 [2024-07-14 04:50:37.932510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.860 [2024-07-14 04:50:37.932524] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.860 [2024-07-14 04:50:37.932540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.860 [2024-07-14 04:50:37.932569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.860 qpair failed and we were unable to recover it. 00:34:17.860 [2024-07-14 04:50:37.942367] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.860 [2024-07-14 04:50:37.942522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.860 [2024-07-14 04:50:37.942547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.860 [2024-07-14 04:50:37.942562] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.860 [2024-07-14 04:50:37.942574] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.860 [2024-07-14 04:50:37.942603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.860 qpair failed and we were unable to recover it. 
00:34:17.860 [2024-07-14 04:50:37.952386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.860 [2024-07-14 04:50:37.952547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.861 [2024-07-14 04:50:37.952573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.861 [2024-07-14 04:50:37.952587] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.861 [2024-07-14 04:50:37.952600] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.861 [2024-07-14 04:50:37.952629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.861 qpair failed and we were unable to recover it. 00:34:17.861 [2024-07-14 04:50:37.962423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.861 [2024-07-14 04:50:37.962569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.861 [2024-07-14 04:50:37.962601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.861 [2024-07-14 04:50:37.962616] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.861 [2024-07-14 04:50:37.962629] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.861 [2024-07-14 04:50:37.962670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.861 qpair failed and we were unable to recover it. 00:34:17.861 [2024-07-14 04:50:37.972445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.861 [2024-07-14 04:50:37.972601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.861 [2024-07-14 04:50:37.972627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.861 [2024-07-14 04:50:37.972641] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.861 [2024-07-14 04:50:37.972654] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.861 [2024-07-14 04:50:37.972685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.861 qpair failed and we were unable to recover it. 
00:34:17.861 [2024-07-14 04:50:37.982462] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.861 [2024-07-14 04:50:37.982619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.861 [2024-07-14 04:50:37.982644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.861 [2024-07-14 04:50:37.982658] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.861 [2024-07-14 04:50:37.982670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.861 [2024-07-14 04:50:37.982699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.861 qpair failed and we were unable to recover it. 00:34:17.861 [2024-07-14 04:50:37.992472] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.861 [2024-07-14 04:50:37.992630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.861 [2024-07-14 04:50:37.992657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.861 [2024-07-14 04:50:37.992671] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.861 [2024-07-14 04:50:37.992684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.861 [2024-07-14 04:50:37.992715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.861 qpair failed and we were unable to recover it. 00:34:17.861 [2024-07-14 04:50:38.002526] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.861 [2024-07-14 04:50:38.002714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.861 [2024-07-14 04:50:38.002739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.861 [2024-07-14 04:50:38.002754] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.861 [2024-07-14 04:50:38.002767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.861 [2024-07-14 04:50:38.002803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.861 qpair failed and we were unable to recover it. 
00:34:17.861 [2024-07-14 04:50:38.012533] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.861 [2024-07-14 04:50:38.012705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.861 [2024-07-14 04:50:38.012730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.861 [2024-07-14 04:50:38.012745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.861 [2024-07-14 04:50:38.012758] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.861 [2024-07-14 04:50:38.012787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.861 qpair failed and we were unable to recover it. 00:34:17.861 [2024-07-14 04:50:38.022626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.861 [2024-07-14 04:50:38.022783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.861 [2024-07-14 04:50:38.022809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.861 [2024-07-14 04:50:38.022823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.861 [2024-07-14 04:50:38.022836] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.861 [2024-07-14 04:50:38.022874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.861 qpair failed and we were unable to recover it. 00:34:17.861 [2024-07-14 04:50:38.032583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.861 [2024-07-14 04:50:38.032768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.861 [2024-07-14 04:50:38.032794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.861 [2024-07-14 04:50:38.032808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.861 [2024-07-14 04:50:38.032821] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.861 [2024-07-14 04:50:38.032849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.861 qpair failed and we were unable to recover it. 
00:34:17.861 [2024-07-14 04:50:38.042638] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.861 [2024-07-14 04:50:38.042789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.861 [2024-07-14 04:50:38.042815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.861 [2024-07-14 04:50:38.042829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.861 [2024-07-14 04:50:38.042842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:17.861 [2024-07-14 04:50:38.042879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.861 qpair failed and we were unable to recover it. 00:34:18.121 [2024-07-14 04:50:38.052645] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.121 [2024-07-14 04:50:38.052844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.121 [2024-07-14 04:50:38.052877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.121 [2024-07-14 04:50:38.052896] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.121 [2024-07-14 04:50:38.052910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.121 [2024-07-14 04:50:38.052940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.121 qpair failed and we were unable to recover it. 00:34:18.121 [2024-07-14 04:50:38.062699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.121 [2024-07-14 04:50:38.062864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.121 [2024-07-14 04:50:38.062898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.121 [2024-07-14 04:50:38.062912] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.121 [2024-07-14 04:50:38.062925] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.121 [2024-07-14 04:50:38.062955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.121 qpair failed and we were unable to recover it. 
00:34:18.121 [2024-07-14 04:50:38.072699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.121 [2024-07-14 04:50:38.072902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.121 [2024-07-14 04:50:38.072928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.121 [2024-07-14 04:50:38.072943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.121 [2024-07-14 04:50:38.072956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.122 [2024-07-14 04:50:38.072985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.122 qpair failed and we were unable to recover it. 00:34:18.122 [2024-07-14 04:50:38.082724] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.122 [2024-07-14 04:50:38.082882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.122 [2024-07-14 04:50:38.082909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.122 [2024-07-14 04:50:38.082923] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.122 [2024-07-14 04:50:38.082936] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.122 [2024-07-14 04:50:38.082965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.122 qpair failed and we were unable to recover it. 00:34:18.122 [2024-07-14 04:50:38.092752] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.122 [2024-07-14 04:50:38.092910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.122 [2024-07-14 04:50:38.092936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.122 [2024-07-14 04:50:38.092951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.122 [2024-07-14 04:50:38.092969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.122 [2024-07-14 04:50:38.092999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.122 qpair failed and we were unable to recover it. 
00:34:18.122 [2024-07-14 04:50:38.102782] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.122 [2024-07-14 04:50:38.102948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.122 [2024-07-14 04:50:38.102973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.122 [2024-07-14 04:50:38.102987] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.122 [2024-07-14 04:50:38.103000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.122 [2024-07-14 04:50:38.103029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.122 qpair failed and we were unable to recover it. 00:34:18.122 [2024-07-14 04:50:38.112829] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.122 [2024-07-14 04:50:38.112991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.122 [2024-07-14 04:50:38.113017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.122 [2024-07-14 04:50:38.113032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.122 [2024-07-14 04:50:38.113044] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.122 [2024-07-14 04:50:38.113076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.122 qpair failed and we were unable to recover it. 00:34:18.122 [2024-07-14 04:50:38.122857] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.122 [2024-07-14 04:50:38.123042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.122 [2024-07-14 04:50:38.123068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.122 [2024-07-14 04:50:38.123082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.122 [2024-07-14 04:50:38.123095] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.122 [2024-07-14 04:50:38.123124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.122 qpair failed and we were unable to recover it. 
00:34:18.122 [2024-07-14 04:50:38.132856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.122 [2024-07-14 04:50:38.133020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.122 [2024-07-14 04:50:38.133047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.122 [2024-07-14 04:50:38.133061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.122 [2024-07-14 04:50:38.133076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.122 [2024-07-14 04:50:38.133106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.122 qpair failed and we were unable to recover it. 00:34:18.122 [2024-07-14 04:50:38.142930] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.122 [2024-07-14 04:50:38.143105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.122 [2024-07-14 04:50:38.143131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.122 [2024-07-14 04:50:38.143146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.122 [2024-07-14 04:50:38.143159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.122 [2024-07-14 04:50:38.143189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.122 qpair failed and we were unable to recover it. 00:34:18.122 [2024-07-14 04:50:38.152933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.122 [2024-07-14 04:50:38.153089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.122 [2024-07-14 04:50:38.153115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.122 [2024-07-14 04:50:38.153130] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.122 [2024-07-14 04:50:38.153143] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.122 [2024-07-14 04:50:38.153172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.122 qpair failed and we were unable to recover it. 
00:34:18.122 [2024-07-14 04:50:38.163120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.122 [2024-07-14 04:50:38.163287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.122 [2024-07-14 04:50:38.163313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.122 [2024-07-14 04:50:38.163327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.122 [2024-07-14 04:50:38.163341] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.122 [2024-07-14 04:50:38.163371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.122 qpair failed and we were unable to recover it. 00:34:18.122 [2024-07-14 04:50:38.173053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.122 [2024-07-14 04:50:38.173246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.122 [2024-07-14 04:50:38.173274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.122 [2024-07-14 04:50:38.173294] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.122 [2024-07-14 04:50:38.173307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.122 [2024-07-14 04:50:38.173339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.122 qpair failed and we were unable to recover it. 00:34:18.122 [2024-07-14 04:50:38.183210] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.122 [2024-07-14 04:50:38.183384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.122 [2024-07-14 04:50:38.183413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.122 [2024-07-14 04:50:38.183435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.122 [2024-07-14 04:50:38.183449] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.122 [2024-07-14 04:50:38.183479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.122 qpair failed and we were unable to recover it. 
00:34:18.122 [2024-07-14 04:50:38.193108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.122 [2024-07-14 04:50:38.193276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.122 [2024-07-14 04:50:38.193305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.122 [2024-07-14 04:50:38.193320] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.122 [2024-07-14 04:50:38.193333] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.122 [2024-07-14 04:50:38.193362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.122 qpair failed and we were unable to recover it. 00:34:18.122 [2024-07-14 04:50:38.203084] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.122 [2024-07-14 04:50:38.203235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.122 [2024-07-14 04:50:38.203262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.122 [2024-07-14 04:50:38.203276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.122 [2024-07-14 04:50:38.203289] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.122 [2024-07-14 04:50:38.203319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.122 qpair failed and we were unable to recover it. 00:34:18.122 [2024-07-14 04:50:38.213089] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.122 [2024-07-14 04:50:38.213233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.122 [2024-07-14 04:50:38.213259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.122 [2024-07-14 04:50:38.213274] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.122 [2024-07-14 04:50:38.213287] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.122 [2024-07-14 04:50:38.213317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.122 qpair failed and we were unable to recover it. 
00:34:18.122 [2024-07-14 04:50:38.223159] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.123 [2024-07-14 04:50:38.223318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.123 [2024-07-14 04:50:38.223344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.123 [2024-07-14 04:50:38.223358] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.123 [2024-07-14 04:50:38.223371] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.123 [2024-07-14 04:50:38.223401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.123 qpair failed and we were unable to recover it. 00:34:18.123 [2024-07-14 04:50:38.233169] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.123 [2024-07-14 04:50:38.233323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.123 [2024-07-14 04:50:38.233349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.123 [2024-07-14 04:50:38.233363] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.123 [2024-07-14 04:50:38.233376] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.123 [2024-07-14 04:50:38.233408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.123 qpair failed and we were unable to recover it. 00:34:18.123 [2024-07-14 04:50:38.243216] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.123 [2024-07-14 04:50:38.243413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.123 [2024-07-14 04:50:38.243438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.123 [2024-07-14 04:50:38.243453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.123 [2024-07-14 04:50:38.243467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.123 [2024-07-14 04:50:38.243497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.123 qpair failed and we were unable to recover it. 
00:34:18.123 [2024-07-14 04:50:38.253227] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.123 [2024-07-14 04:50:38.253424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.123 [2024-07-14 04:50:38.253450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.123 [2024-07-14 04:50:38.253465] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.123 [2024-07-14 04:50:38.253478] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.123 [2024-07-14 04:50:38.253508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.123 qpair failed and we were unable to recover it. 00:34:18.123 [2024-07-14 04:50:38.263337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.123 [2024-07-14 04:50:38.263537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.123 [2024-07-14 04:50:38.263563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.123 [2024-07-14 04:50:38.263577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.123 [2024-07-14 04:50:38.263590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.123 [2024-07-14 04:50:38.263620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.123 qpair failed and we were unable to recover it. 00:34:18.123 [2024-07-14 04:50:38.273299] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.123 [2024-07-14 04:50:38.273458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.123 [2024-07-14 04:50:38.273489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.123 [2024-07-14 04:50:38.273504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.123 [2024-07-14 04:50:38.273517] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.123 [2024-07-14 04:50:38.273546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.123 qpair failed and we were unable to recover it. 
00:34:18.123 [2024-07-14 04:50:38.283291] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.123 [2024-07-14 04:50:38.283446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.123 [2024-07-14 04:50:38.283472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.123 [2024-07-14 04:50:38.283486] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.123 [2024-07-14 04:50:38.283499] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.123 [2024-07-14 04:50:38.283530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.123 qpair failed and we were unable to recover it. 00:34:18.123 [2024-07-14 04:50:38.293453] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.123 [2024-07-14 04:50:38.293652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.123 [2024-07-14 04:50:38.293679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.123 [2024-07-14 04:50:38.293694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.123 [2024-07-14 04:50:38.293711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.123 [2024-07-14 04:50:38.293742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.123 qpair failed and we were unable to recover it. 00:34:18.123 [2024-07-14 04:50:38.303361] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.123 [2024-07-14 04:50:38.303520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.123 [2024-07-14 04:50:38.303547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.123 [2024-07-14 04:50:38.303561] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.123 [2024-07-14 04:50:38.303574] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.123 [2024-07-14 04:50:38.303603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.123 qpair failed and we were unable to recover it. 
00:34:18.384 [2024-07-14 04:50:38.313436] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.384 [2024-07-14 04:50:38.313615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.384 [2024-07-14 04:50:38.313642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.384 [2024-07-14 04:50:38.313662] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.384 [2024-07-14 04:50:38.313676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.384 [2024-07-14 04:50:38.313725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.384 qpair failed and we were unable to recover it. 00:34:18.384 [2024-07-14 04:50:38.323471] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.384 [2024-07-14 04:50:38.323622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.384 [2024-07-14 04:50:38.323649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.384 [2024-07-14 04:50:38.323663] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.384 [2024-07-14 04:50:38.323676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.384 [2024-07-14 04:50:38.323706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.384 qpair failed and we were unable to recover it. 00:34:18.384 [2024-07-14 04:50:38.333433] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.384 [2024-07-14 04:50:38.333589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.384 [2024-07-14 04:50:38.333614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.384 [2024-07-14 04:50:38.333628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.384 [2024-07-14 04:50:38.333641] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.384 [2024-07-14 04:50:38.333671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.384 qpair failed and we were unable to recover it. 
00:34:18.384 [2024-07-14 04:50:38.343561] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.384 [2024-07-14 04:50:38.343721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.384 [2024-07-14 04:50:38.343748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.384 [2024-07-14 04:50:38.343767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.384 [2024-07-14 04:50:38.343781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.384 [2024-07-14 04:50:38.343813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.384 qpair failed and we were unable to recover it. 00:34:18.384 [2024-07-14 04:50:38.353484] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.384 [2024-07-14 04:50:38.353652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.384 [2024-07-14 04:50:38.353678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.384 [2024-07-14 04:50:38.353693] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.384 [2024-07-14 04:50:38.353706] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.384 [2024-07-14 04:50:38.353735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.384 qpair failed and we were unable to recover it. 00:34:18.384 [2024-07-14 04:50:38.363522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.384 [2024-07-14 04:50:38.363670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.384 [2024-07-14 04:50:38.363705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.384 [2024-07-14 04:50:38.363720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.384 [2024-07-14 04:50:38.363732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.384 [2024-07-14 04:50:38.363762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.384 qpair failed and we were unable to recover it. 
00:34:18.384 [2024-07-14 04:50:38.373542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.384 [2024-07-14 04:50:38.373743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.384 [2024-07-14 04:50:38.373770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.384 [2024-07-14 04:50:38.373785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.384 [2024-07-14 04:50:38.373800] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.384 [2024-07-14 04:50:38.373830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.384 qpair failed and we were unable to recover it. 00:34:18.384 [2024-07-14 04:50:38.383575] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.384 [2024-07-14 04:50:38.383749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.385 [2024-07-14 04:50:38.383775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.385 [2024-07-14 04:50:38.383790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.385 [2024-07-14 04:50:38.383803] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.385 [2024-07-14 04:50:38.383833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.385 qpair failed and we were unable to recover it. 00:34:18.385 [2024-07-14 04:50:38.393594] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.385 [2024-07-14 04:50:38.393747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.385 [2024-07-14 04:50:38.393773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.385 [2024-07-14 04:50:38.393787] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.385 [2024-07-14 04:50:38.393800] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.385 [2024-07-14 04:50:38.393830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.385 qpair failed and we were unable to recover it. 
00:34:18.385 [2024-07-14 04:50:38.403662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.385 [2024-07-14 04:50:38.403817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.385 [2024-07-14 04:50:38.403843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.385 [2024-07-14 04:50:38.403857] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.385 [2024-07-14 04:50:38.403878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.385 [2024-07-14 04:50:38.403916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.385 qpair failed and we were unable to recover it. 00:34:18.385 [2024-07-14 04:50:38.413659] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.385 [2024-07-14 04:50:38.413832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.385 [2024-07-14 04:50:38.413858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.385 [2024-07-14 04:50:38.413881] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.385 [2024-07-14 04:50:38.413895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.385 [2024-07-14 04:50:38.413924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.385 qpair failed and we were unable to recover it. 00:34:18.385 [2024-07-14 04:50:38.423742] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.385 [2024-07-14 04:50:38.423917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.385 [2024-07-14 04:50:38.423943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.385 [2024-07-14 04:50:38.423958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.385 [2024-07-14 04:50:38.423971] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.385 [2024-07-14 04:50:38.424001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.385 qpair failed and we were unable to recover it. 
00:34:18.385 [2024-07-14 04:50:38.433708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.385 [2024-07-14 04:50:38.433873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.385 [2024-07-14 04:50:38.433900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.385 [2024-07-14 04:50:38.433913] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.385 [2024-07-14 04:50:38.433926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.385 [2024-07-14 04:50:38.433956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.385 qpair failed and we were unable to recover it. 00:34:18.385 [2024-07-14 04:50:38.443766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.385 [2024-07-14 04:50:38.443929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.385 [2024-07-14 04:50:38.443955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.385 [2024-07-14 04:50:38.443969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.385 [2024-07-14 04:50:38.443982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.385 [2024-07-14 04:50:38.444012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.385 qpair failed and we were unable to recover it. 00:34:18.385 [2024-07-14 04:50:38.453778] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.385 [2024-07-14 04:50:38.453970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.385 [2024-07-14 04:50:38.454001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.385 [2024-07-14 04:50:38.454017] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.385 [2024-07-14 04:50:38.454030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.385 [2024-07-14 04:50:38.454059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.385 qpair failed and we were unable to recover it. 
00:34:18.385 [2024-07-14 04:50:38.463804] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.385 [2024-07-14 04:50:38.463981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.385 [2024-07-14 04:50:38.464008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.385 [2024-07-14 04:50:38.464022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.385 [2024-07-14 04:50:38.464033] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.385 [2024-07-14 04:50:38.464063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.385 qpair failed and we were unable to recover it. 00:34:18.385 [2024-07-14 04:50:38.473909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.385 [2024-07-14 04:50:38.474064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.385 [2024-07-14 04:50:38.474090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.385 [2024-07-14 04:50:38.474104] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.385 [2024-07-14 04:50:38.474117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.385 [2024-07-14 04:50:38.474148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.385 qpair failed and we were unable to recover it. 00:34:18.385 [2024-07-14 04:50:38.483894] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.385 [2024-07-14 04:50:38.484108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.385 [2024-07-14 04:50:38.484135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.385 [2024-07-14 04:50:38.484150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.385 [2024-07-14 04:50:38.484166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.385 [2024-07-14 04:50:38.484198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.385 qpair failed and we were unable to recover it. 
00:34:18.385 [2024-07-14 04:50:38.493889] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.385 [2024-07-14 04:50:38.494049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.385 [2024-07-14 04:50:38.494075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.385 [2024-07-14 04:50:38.494089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.385 [2024-07-14 04:50:38.494107] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.385 [2024-07-14 04:50:38.494137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.385 qpair failed and we were unable to recover it. 00:34:18.385 [2024-07-14 04:50:38.503931] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.385 [2024-07-14 04:50:38.504129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.385 [2024-07-14 04:50:38.504155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.385 [2024-07-14 04:50:38.504169] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.385 [2024-07-14 04:50:38.504182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.385 [2024-07-14 04:50:38.504213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.385 qpair failed and we were unable to recover it. 00:34:18.385 [2024-07-14 04:50:38.513955] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.385 [2024-07-14 04:50:38.514100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.385 [2024-07-14 04:50:38.514125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.385 [2024-07-14 04:50:38.514140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.385 [2024-07-14 04:50:38.514153] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.385 [2024-07-14 04:50:38.514182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.385 qpair failed and we were unable to recover it. 
00:34:18.385 [2024-07-14 04:50:38.523992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.385 [2024-07-14 04:50:38.524146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.385 [2024-07-14 04:50:38.524172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.385 [2024-07-14 04:50:38.524186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.386 [2024-07-14 04:50:38.524199] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.386 [2024-07-14 04:50:38.524229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.386 qpair failed and we were unable to recover it. 00:34:18.386 [2024-07-14 04:50:38.534016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.386 [2024-07-14 04:50:38.534175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.386 [2024-07-14 04:50:38.534201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.386 [2024-07-14 04:50:38.534216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.386 [2024-07-14 04:50:38.534229] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:18.386 [2024-07-14 04:50:38.534260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.386 qpair failed and we were unable to recover it. 00:34:18.386 [2024-07-14 04:50:38.544043] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.386 [2024-07-14 04:50:38.544225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.386 [2024-07-14 04:50:38.544259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.386 [2024-07-14 04:50:38.544275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.386 [2024-07-14 04:50:38.544288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.386 [2024-07-14 04:50:38.544321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.386 qpair failed and we were unable to recover it. 
00:34:18.386 [2024-07-14 04:50:38.554126] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.386 [2024-07-14 04:50:38.554302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.386 [2024-07-14 04:50:38.554331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.386 [2024-07-14 04:50:38.554346] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.386 [2024-07-14 04:50:38.554360] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.386 [2024-07-14 04:50:38.554391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.386 qpair failed and we were unable to recover it. 00:34:18.386 [2024-07-14 04:50:38.564109] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.386 [2024-07-14 04:50:38.564261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.386 [2024-07-14 04:50:38.564290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.386 [2024-07-14 04:50:38.564305] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.386 [2024-07-14 04:50:38.564318] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.386 [2024-07-14 04:50:38.564361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.386 qpair failed and we were unable to recover it. 00:34:18.386 [2024-07-14 04:50:38.574126] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.386 [2024-07-14 04:50:38.574275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.386 [2024-07-14 04:50:38.574301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.386 [2024-07-14 04:50:38.574316] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.386 [2024-07-14 04:50:38.574331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.386 [2024-07-14 04:50:38.574362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.386 qpair failed and we were unable to recover it. 
00:34:18.646 [2024-07-14 04:50:38.584187] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.646 [2024-07-14 04:50:38.584349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.646 [2024-07-14 04:50:38.584377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.646 [2024-07-14 04:50:38.584401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.646 [2024-07-14 04:50:38.584416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.646 [2024-07-14 04:50:38.584447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.646 qpair failed and we were unable to recover it. 00:34:18.646 [2024-07-14 04:50:38.594189] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.646 [2024-07-14 04:50:38.594344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.646 [2024-07-14 04:50:38.594372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.646 [2024-07-14 04:50:38.594387] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.646 [2024-07-14 04:50:38.594400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.646 [2024-07-14 04:50:38.594430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.646 qpair failed and we were unable to recover it. 00:34:18.646 [2024-07-14 04:50:38.604228] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.646 [2024-07-14 04:50:38.604382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.646 [2024-07-14 04:50:38.604410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.646 [2024-07-14 04:50:38.604425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.646 [2024-07-14 04:50:38.604438] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.646 [2024-07-14 04:50:38.604468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.646 qpair failed and we were unable to recover it. 
00:34:18.646 [2024-07-14 04:50:38.614230] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.646 [2024-07-14 04:50:38.614382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.646 [2024-07-14 04:50:38.614410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.646 [2024-07-14 04:50:38.614424] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.646 [2024-07-14 04:50:38.614437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.646 [2024-07-14 04:50:38.614467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.646 qpair failed and we were unable to recover it. 00:34:18.646 [2024-07-14 04:50:38.624278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.646 [2024-07-14 04:50:38.624443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.646 [2024-07-14 04:50:38.624472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.646 [2024-07-14 04:50:38.624487] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.646 [2024-07-14 04:50:38.624500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.646 [2024-07-14 04:50:38.624542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.646 qpair failed and we were unable to recover it. 00:34:18.646 [2024-07-14 04:50:38.634345] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.646 [2024-07-14 04:50:38.634499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.646 [2024-07-14 04:50:38.634526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.646 [2024-07-14 04:50:38.634540] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.646 [2024-07-14 04:50:38.634553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.646 [2024-07-14 04:50:38.634583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.646 qpair failed and we were unable to recover it. 
00:34:18.646 [2024-07-14 04:50:38.644334] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.646 [2024-07-14 04:50:38.644485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.646 [2024-07-14 04:50:38.644512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.646 [2024-07-14 04:50:38.644526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.646 [2024-07-14 04:50:38.644538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.646 [2024-07-14 04:50:38.644569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.646 qpair failed and we were unable to recover it. 00:34:18.647 [2024-07-14 04:50:38.654327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.647 [2024-07-14 04:50:38.654475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.647 [2024-07-14 04:50:38.654502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.647 [2024-07-14 04:50:38.654516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.647 [2024-07-14 04:50:38.654529] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.647 [2024-07-14 04:50:38.654559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.647 qpair failed and we were unable to recover it. 00:34:18.647 [2024-07-14 04:50:38.664411] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.647 [2024-07-14 04:50:38.664576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.647 [2024-07-14 04:50:38.664602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.647 [2024-07-14 04:50:38.664617] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.647 [2024-07-14 04:50:38.664630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.647 [2024-07-14 04:50:38.664660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.647 qpair failed and we were unable to recover it. 
00:34:18.647 [2024-07-14 04:50:38.674403] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.647 [2024-07-14 04:50:38.674602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.647 [2024-07-14 04:50:38.674628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.647 [2024-07-14 04:50:38.674651] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.647 [2024-07-14 04:50:38.674666] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.647 [2024-07-14 04:50:38.674697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.647 qpair failed and we were unable to recover it. 00:34:18.647 [2024-07-14 04:50:38.684450] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.647 [2024-07-14 04:50:38.684605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.647 [2024-07-14 04:50:38.684632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.647 [2024-07-14 04:50:38.684647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.647 [2024-07-14 04:50:38.684660] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.647 [2024-07-14 04:50:38.684703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.647 qpair failed and we were unable to recover it. 00:34:18.647 [2024-07-14 04:50:38.694448] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.647 [2024-07-14 04:50:38.694614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.647 [2024-07-14 04:50:38.694641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.647 [2024-07-14 04:50:38.694656] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.647 [2024-07-14 04:50:38.694669] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.647 [2024-07-14 04:50:38.694698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.647 qpair failed and we were unable to recover it. 
00:34:18.647 [2024-07-14 04:50:38.704496] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.647 [2024-07-14 04:50:38.704663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.647 [2024-07-14 04:50:38.704689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.647 [2024-07-14 04:50:38.704703] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.647 [2024-07-14 04:50:38.704716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.647 [2024-07-14 04:50:38.704746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.647 qpair failed and we were unable to recover it. 00:34:18.647 [2024-07-14 04:50:38.714530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.647 [2024-07-14 04:50:38.714681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.647 [2024-07-14 04:50:38.714707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.647 [2024-07-14 04:50:38.714721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.647 [2024-07-14 04:50:38.714735] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.647 [2024-07-14 04:50:38.714764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.647 qpair failed and we were unable to recover it. 00:34:18.647 [2024-07-14 04:50:38.724543] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.647 [2024-07-14 04:50:38.724697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.647 [2024-07-14 04:50:38.724723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.647 [2024-07-14 04:50:38.724737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.647 [2024-07-14 04:50:38.724750] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.647 [2024-07-14 04:50:38.724779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.647 qpair failed and we were unable to recover it. 
00:34:18.647 [2024-07-14 04:50:38.734537] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.647 [2024-07-14 04:50:38.734702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.647 [2024-07-14 04:50:38.734728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.647 [2024-07-14 04:50:38.734742] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.647 [2024-07-14 04:50:38.734756] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.647 [2024-07-14 04:50:38.734785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.647 qpair failed and we were unable to recover it. 00:34:18.647 [2024-07-14 04:50:38.744593] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.647 [2024-07-14 04:50:38.744790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.647 [2024-07-14 04:50:38.744816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.647 [2024-07-14 04:50:38.744831] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.647 [2024-07-14 04:50:38.744844] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.647 [2024-07-14 04:50:38.744883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.647 qpair failed and we were unable to recover it. 00:34:18.647 [2024-07-14 04:50:38.754597] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.647 [2024-07-14 04:50:38.754769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.647 [2024-07-14 04:50:38.754796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.647 [2024-07-14 04:50:38.754810] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.647 [2024-07-14 04:50:38.754823] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.647 [2024-07-14 04:50:38.754854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.647 qpair failed and we were unable to recover it. 
00:34:18.647 [2024-07-14 04:50:38.764679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.647 [2024-07-14 04:50:38.764835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.647 [2024-07-14 04:50:38.764873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.647 [2024-07-14 04:50:38.764891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.647 [2024-07-14 04:50:38.764905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.647 [2024-07-14 04:50:38.764936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.647 qpair failed and we were unable to recover it. 00:34:18.647 [2024-07-14 04:50:38.774704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.647 [2024-07-14 04:50:38.774910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.647 [2024-07-14 04:50:38.774937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.647 [2024-07-14 04:50:38.774952] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.647 [2024-07-14 04:50:38.774965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.647 [2024-07-14 04:50:38.774996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.647 qpair failed and we were unable to recover it. 00:34:18.647 [2024-07-14 04:50:38.784734] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.647 [2024-07-14 04:50:38.784908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.647 [2024-07-14 04:50:38.784936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.647 [2024-07-14 04:50:38.784951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.647 [2024-07-14 04:50:38.784964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.647 [2024-07-14 04:50:38.784996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.647 qpair failed and we were unable to recover it. 
00:34:18.647 [2024-07-14 04:50:38.794723] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.648 [2024-07-14 04:50:38.794953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.648 [2024-07-14 04:50:38.794982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.648 [2024-07-14 04:50:38.795002] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.648 [2024-07-14 04:50:38.795015] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.648 [2024-07-14 04:50:38.795047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.648 qpair failed and we were unable to recover it. 00:34:18.648 [2024-07-14 04:50:38.804759] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.648 [2024-07-14 04:50:38.804965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.648 [2024-07-14 04:50:38.804992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.648 [2024-07-14 04:50:38.805012] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.648 [2024-07-14 04:50:38.805026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.648 [2024-07-14 04:50:38.805065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.648 qpair failed and we were unable to recover it. 00:34:18.648 [2024-07-14 04:50:38.814839] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.648 [2024-07-14 04:50:38.815004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.648 [2024-07-14 04:50:38.815031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.648 [2024-07-14 04:50:38.815046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.648 [2024-07-14 04:50:38.815059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.648 [2024-07-14 04:50:38.815101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.648 qpair failed and we were unable to recover it. 
00:34:18.648 [2024-07-14 04:50:38.824840] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.648 [2024-07-14 04:50:38.825059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.648 [2024-07-14 04:50:38.825087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.648 [2024-07-14 04:50:38.825102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.648 [2024-07-14 04:50:38.825119] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.648 [2024-07-14 04:50:38.825151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.648 qpair failed and we were unable to recover it. 00:34:18.648 [2024-07-14 04:50:38.834862] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.648 [2024-07-14 04:50:38.835057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.648 [2024-07-14 04:50:38.835084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.648 [2024-07-14 04:50:38.835099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.648 [2024-07-14 04:50:38.835111] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.648 [2024-07-14 04:50:38.835142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.648 qpair failed and we were unable to recover it. 00:34:18.907 [2024-07-14 04:50:38.844926] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.907 [2024-07-14 04:50:38.845078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.907 [2024-07-14 04:50:38.845106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.907 [2024-07-14 04:50:38.845120] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.907 [2024-07-14 04:50:38.845133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.907 [2024-07-14 04:50:38.845178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.907 qpair failed and we were unable to recover it. 
00:34:18.907 [2024-07-14 04:50:38.854922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.907 [2024-07-14 04:50:38.855075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.907 [2024-07-14 04:50:38.855108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.907 [2024-07-14 04:50:38.855125] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.907 [2024-07-14 04:50:38.855138] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.907 [2024-07-14 04:50:38.855169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.907 qpair failed and we were unable to recover it. 00:34:18.907 [2024-07-14 04:50:38.864960] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.907 [2024-07-14 04:50:38.865116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.907 [2024-07-14 04:50:38.865143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.907 [2024-07-14 04:50:38.865158] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.907 [2024-07-14 04:50:38.865170] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.907 [2024-07-14 04:50:38.865199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.907 qpair failed and we were unable to recover it. 00:34:18.907 [2024-07-14 04:50:38.874953] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.907 [2024-07-14 04:50:38.875110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.907 [2024-07-14 04:50:38.875136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.907 [2024-07-14 04:50:38.875151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.907 [2024-07-14 04:50:38.875163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.908 [2024-07-14 04:50:38.875195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.908 qpair failed and we were unable to recover it. 
00:34:18.908 [2024-07-14 04:50:38.884991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.908 [2024-07-14 04:50:38.885156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.908 [2024-07-14 04:50:38.885182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.908 [2024-07-14 04:50:38.885196] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.908 [2024-07-14 04:50:38.885209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.908 [2024-07-14 04:50:38.885240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.908 qpair failed and we were unable to recover it. 00:34:18.908 [2024-07-14 04:50:38.895036] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.908 [2024-07-14 04:50:38.895211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.908 [2024-07-14 04:50:38.895237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.908 [2024-07-14 04:50:38.895252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.908 [2024-07-14 04:50:38.895270] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.908 [2024-07-14 04:50:38.895301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.908 qpair failed and we were unable to recover it. 00:34:18.908 [2024-07-14 04:50:38.905172] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.908 [2024-07-14 04:50:38.905332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.908 [2024-07-14 04:50:38.905359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.908 [2024-07-14 04:50:38.905374] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.908 [2024-07-14 04:50:38.905387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.908 [2024-07-14 04:50:38.905416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.908 qpair failed and we were unable to recover it. 
00:34:18.908 [2024-07-14 04:50:38.915179] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.908 [2024-07-14 04:50:38.915349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.908 [2024-07-14 04:50:38.915375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.908 [2024-07-14 04:50:38.915389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.908 [2024-07-14 04:50:38.915402] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.908 [2024-07-14 04:50:38.915432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.908 qpair failed and we were unable to recover it. 00:34:18.908 [2024-07-14 04:50:38.925107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.908 [2024-07-14 04:50:38.925283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.908 [2024-07-14 04:50:38.925309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.908 [2024-07-14 04:50:38.925323] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.908 [2024-07-14 04:50:38.925337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.908 [2024-07-14 04:50:38.925366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.908 qpair failed and we were unable to recover it. 00:34:18.908 [2024-07-14 04:50:38.935148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.908 [2024-07-14 04:50:38.935297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.908 [2024-07-14 04:50:38.935323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.908 [2024-07-14 04:50:38.935337] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.908 [2024-07-14 04:50:38.935351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.908 [2024-07-14 04:50:38.935393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.908 qpair failed and we were unable to recover it. 
00:34:18.908 [2024-07-14 04:50:38.945173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.908 [2024-07-14 04:50:38.945380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.908 [2024-07-14 04:50:38.945406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.908 [2024-07-14 04:50:38.945421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.908 [2024-07-14 04:50:38.945432] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.908 [2024-07-14 04:50:38.945462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.908 qpair failed and we were unable to recover it. 00:34:18.908 [2024-07-14 04:50:38.955235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.908 [2024-07-14 04:50:38.955392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.908 [2024-07-14 04:50:38.955418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.908 [2024-07-14 04:50:38.955432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.908 [2024-07-14 04:50:38.955445] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.908 [2024-07-14 04:50:38.955476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.908 qpair failed and we were unable to recover it. 00:34:18.908 [2024-07-14 04:50:38.965201] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.908 [2024-07-14 04:50:38.965362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.908 [2024-07-14 04:50:38.965388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.908 [2024-07-14 04:50:38.965403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.908 [2024-07-14 04:50:38.965415] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.908 [2024-07-14 04:50:38.965445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.908 qpair failed and we were unable to recover it. 
00:34:18.908 [2024-07-14 04:50:38.975327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.908 [2024-07-14 04:50:38.975475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.908 [2024-07-14 04:50:38.975501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.908 [2024-07-14 04:50:38.975515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.908 [2024-07-14 04:50:38.975528] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.908 [2024-07-14 04:50:38.975558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.908 qpair failed and we were unable to recover it. 00:34:18.908 [2024-07-14 04:50:38.985286] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.908 [2024-07-14 04:50:38.985445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.908 [2024-07-14 04:50:38.985469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.908 [2024-07-14 04:50:38.985489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.908 [2024-07-14 04:50:38.985502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.908 [2024-07-14 04:50:38.985532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.908 qpair failed and we were unable to recover it. 00:34:18.908 [2024-07-14 04:50:38.995370] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.908 [2024-07-14 04:50:38.995572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.908 [2024-07-14 04:50:38.995599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.908 [2024-07-14 04:50:38.995613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.908 [2024-07-14 04:50:38.995626] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.908 [2024-07-14 04:50:38.995656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.908 qpair failed and we were unable to recover it. 
00:34:18.908 [2024-07-14 04:50:39.005374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.908 [2024-07-14 04:50:39.005534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.908 [2024-07-14 04:50:39.005560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.908 [2024-07-14 04:50:39.005574] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.908 [2024-07-14 04:50:39.005587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.908 [2024-07-14 04:50:39.005616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.908 qpair failed and we were unable to recover it. 00:34:18.908 [2024-07-14 04:50:39.015342] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.908 [2024-07-14 04:50:39.015494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.908 [2024-07-14 04:50:39.015521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.908 [2024-07-14 04:50:39.015535] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.908 [2024-07-14 04:50:39.015549] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.908 [2024-07-14 04:50:39.015579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.908 qpair failed and we were unable to recover it. 00:34:18.909 [2024-07-14 04:50:39.025394] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.909 [2024-07-14 04:50:39.025552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.909 [2024-07-14 04:50:39.025578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.909 [2024-07-14 04:50:39.025592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.909 [2024-07-14 04:50:39.025605] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.909 [2024-07-14 04:50:39.025635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.909 qpair failed and we were unable to recover it. 
00:34:18.909 [2024-07-14 04:50:39.035436] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.909 [2024-07-14 04:50:39.035588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.909 [2024-07-14 04:50:39.035613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.909 [2024-07-14 04:50:39.035628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.909 [2024-07-14 04:50:39.035641] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.909 [2024-07-14 04:50:39.035670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.909 qpair failed and we were unable to recover it. 00:34:18.909 [2024-07-14 04:50:39.045423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.909 [2024-07-14 04:50:39.045573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.909 [2024-07-14 04:50:39.045599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.909 [2024-07-14 04:50:39.045613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.909 [2024-07-14 04:50:39.045626] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.909 [2024-07-14 04:50:39.045656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.909 qpair failed and we were unable to recover it. 00:34:18.909 [2024-07-14 04:50:39.055477] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.909 [2024-07-14 04:50:39.055645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.909 [2024-07-14 04:50:39.055670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.909 [2024-07-14 04:50:39.055684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.909 [2024-07-14 04:50:39.055697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.909 [2024-07-14 04:50:39.055726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.909 qpair failed and we were unable to recover it. 
00:34:18.909 [2024-07-14 04:50:39.065506] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.909 [2024-07-14 04:50:39.065661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.909 [2024-07-14 04:50:39.065686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.909 [2024-07-14 04:50:39.065701] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.909 [2024-07-14 04:50:39.065713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.909 [2024-07-14 04:50:39.065743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.909 qpair failed and we were unable to recover it. 00:34:18.909 [2024-07-14 04:50:39.075510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.909 [2024-07-14 04:50:39.075685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.909 [2024-07-14 04:50:39.075711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.909 [2024-07-14 04:50:39.075731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.909 [2024-07-14 04:50:39.075745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.909 [2024-07-14 04:50:39.075775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.909 qpair failed and we were unable to recover it. 00:34:18.909 [2024-07-14 04:50:39.085564] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.909 [2024-07-14 04:50:39.085724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.909 [2024-07-14 04:50:39.085750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.909 [2024-07-14 04:50:39.085765] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.909 [2024-07-14 04:50:39.085778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.909 [2024-07-14 04:50:39.085807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.909 qpair failed and we were unable to recover it. 
00:34:18.909 [2024-07-14 04:50:39.095610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.909 [2024-07-14 04:50:39.095772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.909 [2024-07-14 04:50:39.095799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.909 [2024-07-14 04:50:39.095814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.909 [2024-07-14 04:50:39.095830] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:18.909 [2024-07-14 04:50:39.095862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.909 qpair failed and we were unable to recover it. 00:34:19.169 [2024-07-14 04:50:39.105653] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.169 [2024-07-14 04:50:39.105811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.169 [2024-07-14 04:50:39.105837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.169 [2024-07-14 04:50:39.105852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.169 [2024-07-14 04:50:39.105873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.169 [2024-07-14 04:50:39.105906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.169 qpair failed and we were unable to recover it. 00:34:19.169 [2024-07-14 04:50:39.115659] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.169 [2024-07-14 04:50:39.115843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.169 [2024-07-14 04:50:39.115879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.169 [2024-07-14 04:50:39.115910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.169 [2024-07-14 04:50:39.115925] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.169 [2024-07-14 04:50:39.115958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.169 qpair failed and we were unable to recover it. 
00:34:19.169 [2024-07-14 04:50:39.125667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.169 [2024-07-14 04:50:39.125857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.169 [2024-07-14 04:50:39.125891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.169 [2024-07-14 04:50:39.125906] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.169 [2024-07-14 04:50:39.125919] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.169 [2024-07-14 04:50:39.125949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.169 qpair failed and we were unable to recover it. 00:34:19.169 [2024-07-14 04:50:39.135680] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.169 [2024-07-14 04:50:39.135834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.169 [2024-07-14 04:50:39.135859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.169 [2024-07-14 04:50:39.135883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.169 [2024-07-14 04:50:39.135898] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.169 [2024-07-14 04:50:39.135929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.169 qpair failed and we were unable to recover it. 00:34:19.169 [2024-07-14 04:50:39.145779] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.169 [2024-07-14 04:50:39.145955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.169 [2024-07-14 04:50:39.145982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.169 [2024-07-14 04:50:39.145996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.169 [2024-07-14 04:50:39.146009] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.169 [2024-07-14 04:50:39.146041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.169 qpair failed and we were unable to recover it. 
00:34:19.169 [2024-07-14 04:50:39.155771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.169 [2024-07-14 04:50:39.155934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.169 [2024-07-14 04:50:39.155960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.169 [2024-07-14 04:50:39.155975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.169 [2024-07-14 04:50:39.155988] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.169 [2024-07-14 04:50:39.156018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.169 qpair failed and we were unable to recover it. 00:34:19.169 [2024-07-14 04:50:39.165809] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.169 [2024-07-14 04:50:39.166008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.169 [2024-07-14 04:50:39.166039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.169 [2024-07-14 04:50:39.166055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.169 [2024-07-14 04:50:39.166068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.169 [2024-07-14 04:50:39.166098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.169 qpair failed and we were unable to recover it. 00:34:19.169 [2024-07-14 04:50:39.175794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.169 [2024-07-14 04:50:39.175947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.169 [2024-07-14 04:50:39.175974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.169 [2024-07-14 04:50:39.175988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.169 [2024-07-14 04:50:39.176002] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.169 [2024-07-14 04:50:39.176033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.169 qpair failed and we were unable to recover it. 
00:34:19.169 [2024-07-14 04:50:39.185908] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.169 [2024-07-14 04:50:39.186104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.169 [2024-07-14 04:50:39.186131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.169 [2024-07-14 04:50:39.186145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.169 [2024-07-14 04:50:39.186158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.169 [2024-07-14 04:50:39.186188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.169 qpair failed and we were unable to recover it. 00:34:19.169 [2024-07-14 04:50:39.195850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.169 [2024-07-14 04:50:39.196026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.169 [2024-07-14 04:50:39.196052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.169 [2024-07-14 04:50:39.196066] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.169 [2024-07-14 04:50:39.196079] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.169 [2024-07-14 04:50:39.196110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.169 qpair failed and we were unable to recover it. 00:34:19.169 [2024-07-14 04:50:39.205908] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.169 [2024-07-14 04:50:39.206069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.169 [2024-07-14 04:50:39.206097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.169 [2024-07-14 04:50:39.206114] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.169 [2024-07-14 04:50:39.206127] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.169 [2024-07-14 04:50:39.206163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.169 qpair failed and we were unable to recover it. 
00:34:19.169 [2024-07-14 04:50:39.215916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.170 [2024-07-14 04:50:39.216115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.170 [2024-07-14 04:50:39.216141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.170 [2024-07-14 04:50:39.216155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.170 [2024-07-14 04:50:39.216168] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.170 [2024-07-14 04:50:39.216211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.170 qpair failed and we were unable to recover it. 00:34:19.170 [2024-07-14 04:50:39.225949] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.170 [2024-07-14 04:50:39.226107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.170 [2024-07-14 04:50:39.226133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.170 [2024-07-14 04:50:39.226148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.170 [2024-07-14 04:50:39.226161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.170 [2024-07-14 04:50:39.226192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.170 qpair failed and we were unable to recover it. 00:34:19.170 [2024-07-14 04:50:39.236017] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.170 [2024-07-14 04:50:39.236173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.170 [2024-07-14 04:50:39.236200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.170 [2024-07-14 04:50:39.236214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.170 [2024-07-14 04:50:39.236227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.170 [2024-07-14 04:50:39.236257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.170 qpair failed and we were unable to recover it. 
00:34:19.170 [2024-07-14 04:50:39.246000] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.170 [2024-07-14 04:50:39.246158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.170 [2024-07-14 04:50:39.246187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.170 [2024-07-14 04:50:39.246204] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.170 [2024-07-14 04:50:39.246217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.170 [2024-07-14 04:50:39.246249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.170 qpair failed and we were unable to recover it. 00:34:19.170 [2024-07-14 04:50:39.256075] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.170 [2024-07-14 04:50:39.256302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.170 [2024-07-14 04:50:39.256333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.170 [2024-07-14 04:50:39.256347] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.170 [2024-07-14 04:50:39.256361] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.170 [2024-07-14 04:50:39.256390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.170 qpair failed and we were unable to recover it. 00:34:19.170 [2024-07-14 04:50:39.266068] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.170 [2024-07-14 04:50:39.266225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.170 [2024-07-14 04:50:39.266251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.170 [2024-07-14 04:50:39.266266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.170 [2024-07-14 04:50:39.266278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.170 [2024-07-14 04:50:39.266309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.170 qpair failed and we were unable to recover it. 
00:34:19.170 [2024-07-14 04:50:39.276081] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.170 [2024-07-14 04:50:39.276249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.170 [2024-07-14 04:50:39.276276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.170 [2024-07-14 04:50:39.276290] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.170 [2024-07-14 04:50:39.276302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.170 [2024-07-14 04:50:39.276345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.170 qpair failed and we were unable to recover it. 00:34:19.170 [2024-07-14 04:50:39.286163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.170 [2024-07-14 04:50:39.286326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.170 [2024-07-14 04:50:39.286352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.170 [2024-07-14 04:50:39.286367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.170 [2024-07-14 04:50:39.286380] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.170 [2024-07-14 04:50:39.286410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.170 qpair failed and we were unable to recover it. 00:34:19.170 [2024-07-14 04:50:39.296155] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.170 [2024-07-14 04:50:39.296314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.170 [2024-07-14 04:50:39.296340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.170 [2024-07-14 04:50:39.296355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.170 [2024-07-14 04:50:39.296375] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.170 [2024-07-14 04:50:39.296406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.170 qpair failed and we were unable to recover it. 
00:34:19.170 [2024-07-14 04:50:39.306237] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.170 [2024-07-14 04:50:39.306435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.170 [2024-07-14 04:50:39.306462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.170 [2024-07-14 04:50:39.306476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.170 [2024-07-14 04:50:39.306488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.170 [2024-07-14 04:50:39.306518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.170 qpair failed and we were unable to recover it. 00:34:19.170 [2024-07-14 04:50:39.316232] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.170 [2024-07-14 04:50:39.316431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.170 [2024-07-14 04:50:39.316458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.170 [2024-07-14 04:50:39.316472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.170 [2024-07-14 04:50:39.316486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.170 [2024-07-14 04:50:39.316517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.170 qpair failed and we were unable to recover it. 00:34:19.170 [2024-07-14 04:50:39.326256] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.170 [2024-07-14 04:50:39.326418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.170 [2024-07-14 04:50:39.326443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.170 [2024-07-14 04:50:39.326458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.170 [2024-07-14 04:50:39.326471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.170 [2024-07-14 04:50:39.326501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.170 qpair failed and we were unable to recover it. 
00:34:19.170 [2024-07-14 04:50:39.336252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.170 [2024-07-14 04:50:39.336407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.170 [2024-07-14 04:50:39.336433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.170 [2024-07-14 04:50:39.336448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.170 [2024-07-14 04:50:39.336461] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.170 [2024-07-14 04:50:39.336490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.170 qpair failed and we were unable to recover it. 00:34:19.170 [2024-07-14 04:50:39.346345] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.170 [2024-07-14 04:50:39.346512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.170 [2024-07-14 04:50:39.346538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.170 [2024-07-14 04:50:39.346553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.170 [2024-07-14 04:50:39.346566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.170 [2024-07-14 04:50:39.346595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.170 qpair failed and we were unable to recover it. 00:34:19.170 [2024-07-14 04:50:39.356321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.170 [2024-07-14 04:50:39.356501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.170 [2024-07-14 04:50:39.356527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.171 [2024-07-14 04:50:39.356541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.171 [2024-07-14 04:50:39.356554] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.171 [2024-07-14 04:50:39.356585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.171 qpair failed and we were unable to recover it. 
00:34:19.430 [2024-07-14 04:50:39.366359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.430 [2024-07-14 04:50:39.366515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.430 [2024-07-14 04:50:39.366542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.430 [2024-07-14 04:50:39.366556] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.430 [2024-07-14 04:50:39.366570] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.430 [2024-07-14 04:50:39.366601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.430 qpair failed and we were unable to recover it. 00:34:19.430 [2024-07-14 04:50:39.376374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.430 [2024-07-14 04:50:39.376523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.430 [2024-07-14 04:50:39.376550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.430 [2024-07-14 04:50:39.376565] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.430 [2024-07-14 04:50:39.376578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.430 [2024-07-14 04:50:39.376608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.430 qpair failed and we were unable to recover it. 00:34:19.430 [2024-07-14 04:50:39.386385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.430 [2024-07-14 04:50:39.386543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.430 [2024-07-14 04:50:39.386569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.430 [2024-07-14 04:50:39.386584] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.430 [2024-07-14 04:50:39.386603] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.430 [2024-07-14 04:50:39.386636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.430 qpair failed and we were unable to recover it. 
00:34:19.430 [2024-07-14 04:50:39.396409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.430 [2024-07-14 04:50:39.396567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.430 [2024-07-14 04:50:39.396593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.430 [2024-07-14 04:50:39.396607] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.430 [2024-07-14 04:50:39.396620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.430 [2024-07-14 04:50:39.396650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.430 qpair failed and we were unable to recover it. 00:34:19.430 [2024-07-14 04:50:39.406482] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.430 [2024-07-14 04:50:39.406640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.430 [2024-07-14 04:50:39.406666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.430 [2024-07-14 04:50:39.406680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.430 [2024-07-14 04:50:39.406693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.430 [2024-07-14 04:50:39.406725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.430 qpair failed and we were unable to recover it. 00:34:19.430 [2024-07-14 04:50:39.416482] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.430 [2024-07-14 04:50:39.416703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.430 [2024-07-14 04:50:39.416729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.430 [2024-07-14 04:50:39.416744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.430 [2024-07-14 04:50:39.416757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.430 [2024-07-14 04:50:39.416786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.430 qpair failed and we were unable to recover it. 
00:34:19.430 [2024-07-14 04:50:39.426560] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.430 [2024-07-14 04:50:39.426740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.430 [2024-07-14 04:50:39.426767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.430 [2024-07-14 04:50:39.426782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.430 [2024-07-14 04:50:39.426796] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.430 [2024-07-14 04:50:39.426825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.430 qpair failed and we were unable to recover it. 00:34:19.430 [2024-07-14 04:50:39.436539] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.430 [2024-07-14 04:50:39.436699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.430 [2024-07-14 04:50:39.436725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.430 [2024-07-14 04:50:39.436739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.430 [2024-07-14 04:50:39.436752] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.430 [2024-07-14 04:50:39.436781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.430 qpair failed and we were unable to recover it. 00:34:19.430 [2024-07-14 04:50:39.446548] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.430 [2024-07-14 04:50:39.446735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.430 [2024-07-14 04:50:39.446761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.430 [2024-07-14 04:50:39.446775] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.430 [2024-07-14 04:50:39.446788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.430 [2024-07-14 04:50:39.446819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.431 qpair failed and we were unable to recover it. 
00:34:19.431 [2024-07-14 04:50:39.456613] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.431 [2024-07-14 04:50:39.456766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.431 [2024-07-14 04:50:39.456791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.431 [2024-07-14 04:50:39.456805] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.431 [2024-07-14 04:50:39.456818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.431 [2024-07-14 04:50:39.456848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.431 qpair failed and we were unable to recover it. 00:34:19.431 [2024-07-14 04:50:39.466632] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.431 [2024-07-14 04:50:39.466810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.431 [2024-07-14 04:50:39.466836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.431 [2024-07-14 04:50:39.466850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.431 [2024-07-14 04:50:39.466863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.431 [2024-07-14 04:50:39.466904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.431 qpair failed and we were unable to recover it. 00:34:19.431 [2024-07-14 04:50:39.476694] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.431 [2024-07-14 04:50:39.476893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.431 [2024-07-14 04:50:39.476919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.431 [2024-07-14 04:50:39.476939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.431 [2024-07-14 04:50:39.476953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.431 [2024-07-14 04:50:39.476983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.431 qpair failed and we were unable to recover it. 
00:34:19.431 [2024-07-14 04:50:39.486670] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.431 [2024-07-14 04:50:39.486819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.431 [2024-07-14 04:50:39.486845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.431 [2024-07-14 04:50:39.486860] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.431 [2024-07-14 04:50:39.486883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.431 [2024-07-14 04:50:39.486915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.431 qpair failed and we were unable to recover it. 00:34:19.431 [2024-07-14 04:50:39.496715] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.431 [2024-07-14 04:50:39.496873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.431 [2024-07-14 04:50:39.496900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.431 [2024-07-14 04:50:39.496914] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.431 [2024-07-14 04:50:39.496927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.431 [2024-07-14 04:50:39.496958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.431 qpair failed and we were unable to recover it. 00:34:19.431 [2024-07-14 04:50:39.506750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.431 [2024-07-14 04:50:39.506914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.431 [2024-07-14 04:50:39.506940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.431 [2024-07-14 04:50:39.506954] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.431 [2024-07-14 04:50:39.506968] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.431 [2024-07-14 04:50:39.507011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.431 qpair failed and we were unable to recover it. 
00:34:19.431 [2024-07-14 04:50:39.516758] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.431 [2024-07-14 04:50:39.516917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.431 [2024-07-14 04:50:39.516943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.431 [2024-07-14 04:50:39.516957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.431 [2024-07-14 04:50:39.516970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.431 [2024-07-14 04:50:39.517000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.431 qpair failed and we were unable to recover it. 00:34:19.431 [2024-07-14 04:50:39.526787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.431 [2024-07-14 04:50:39.526953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.431 [2024-07-14 04:50:39.526980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.431 [2024-07-14 04:50:39.526994] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.431 [2024-07-14 04:50:39.527007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.431 [2024-07-14 04:50:39.527036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.431 qpair failed and we were unable to recover it. 00:34:19.431 [2024-07-14 04:50:39.536819] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.431 [2024-07-14 04:50:39.536992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.431 [2024-07-14 04:50:39.537017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.431 [2024-07-14 04:50:39.537032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.431 [2024-07-14 04:50:39.537044] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.431 [2024-07-14 04:50:39.537075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.431 qpair failed and we were unable to recover it. 
00:34:19.431 [2024-07-14 04:50:39.546906] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.431 [2024-07-14 04:50:39.547107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.431 [2024-07-14 04:50:39.547132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.431 [2024-07-14 04:50:39.547146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.431 [2024-07-14 04:50:39.547159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.431 [2024-07-14 04:50:39.547190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.431 qpair failed and we were unable to recover it. 00:34:19.431 [2024-07-14 04:50:39.556887] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.431 [2024-07-14 04:50:39.557042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.431 [2024-07-14 04:50:39.557069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.431 [2024-07-14 04:50:39.557083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.431 [2024-07-14 04:50:39.557096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.431 [2024-07-14 04:50:39.557126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.431 qpair failed and we were unable to recover it. 00:34:19.431 [2024-07-14 04:50:39.566911] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.431 [2024-07-14 04:50:39.567066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.431 [2024-07-14 04:50:39.567099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.431 [2024-07-14 04:50:39.567115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.431 [2024-07-14 04:50:39.567129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.431 [2024-07-14 04:50:39.567171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.431 qpair failed and we were unable to recover it. 
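The same block keeps repeating for every queue the host retries, spaced roughly 10 ms apart. A small summarizing sketch like the one below (again a reading aid; the regular expression is written against the exact message text shown in these entries and is otherwise an assumption about the console layout) can count the rejected CONNECT attempts and report the window they span:

import re
from datetime import datetime

# Matches the target-side timestamp of each rejected I/O-queue CONNECT,
# e.g. "[2024-07-14 04:50:39.185908] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair".
REJECT_RE = re.compile(
    r"\[(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\] ctrlr\.c:\s*\d+:"
    r"_nvmf_ctrlr_add_io_qpair: \*ERROR\*: Unknown controller ID"
)

def summarize(console_text: str) -> str:
    """Count the rejected CONNECT attempts and report the time window."""
    stamps = [
        datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S.%f")
        for m in REJECT_RE.finditer(console_text)
    ]
    if not stamps:
        return "no rejected CONNECT attempts found"
    span = (stamps[-1] - stamps[0]).total_seconds()
    return f"{len(stamps)} rejected CONNECT attempts over {span:.3f} s"

# Example: print(summarize(open("console.log").read()))

Run over this section of the console, it would report a steady stream of rejections rather than a single transient error, which is what the raw repetition is showing.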
00:34:19.431 [2024-07-14 04:50:39.576929] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.431 [2024-07-14 04:50:39.577077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.431 [2024-07-14 04:50:39.577103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.431 [2024-07-14 04:50:39.577118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.431 [2024-07-14 04:50:39.577131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.431 [2024-07-14 04:50:39.577161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.431 qpair failed and we were unable to recover it. 00:34:19.431 [2024-07-14 04:50:39.586999] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.431 [2024-07-14 04:50:39.587222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.431 [2024-07-14 04:50:39.587248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.431 [2024-07-14 04:50:39.587263] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.431 [2024-07-14 04:50:39.587276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.431 [2024-07-14 04:50:39.587306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.431 qpair failed and we were unable to recover it. 00:34:19.431 [2024-07-14 04:50:39.597007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.432 [2024-07-14 04:50:39.597167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.432 [2024-07-14 04:50:39.597193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.432 [2024-07-14 04:50:39.597207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.432 [2024-07-14 04:50:39.597220] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.432 [2024-07-14 04:50:39.597250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.432 qpair failed and we were unable to recover it. 
00:34:19.432 [2024-07-14 04:50:39.607082] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.432 [2024-07-14 04:50:39.607228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.432 [2024-07-14 04:50:39.607254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.432 [2024-07-14 04:50:39.607268] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.432 [2024-07-14 04:50:39.607282] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.432 [2024-07-14 04:50:39.607317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.432 qpair failed and we were unable to recover it. 00:34:19.432 [2024-07-14 04:50:39.617089] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.432 [2024-07-14 04:50:39.617247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.432 [2024-07-14 04:50:39.617272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.432 [2024-07-14 04:50:39.617287] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.432 [2024-07-14 04:50:39.617300] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.432 [2024-07-14 04:50:39.617331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.432 qpair failed and we were unable to recover it. 00:34:19.692 [2024-07-14 04:50:39.627124] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.692 [2024-07-14 04:50:39.627308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.692 [2024-07-14 04:50:39.627334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.692 [2024-07-14 04:50:39.627349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.692 [2024-07-14 04:50:39.627362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.692 [2024-07-14 04:50:39.627392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.692 qpair failed and we were unable to recover it. 
00:34:19.692 [2024-07-14 04:50:39.637134] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.692 [2024-07-14 04:50:39.637285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.692 [2024-07-14 04:50:39.637311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.692 [2024-07-14 04:50:39.637326] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.692 [2024-07-14 04:50:39.637339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.692 [2024-07-14 04:50:39.637380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-14 04:50:39.647150] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.693 [2024-07-14 04:50:39.647349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.693 [2024-07-14 04:50:39.647375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.693 [2024-07-14 04:50:39.647390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.693 [2024-07-14 04:50:39.647403] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.693 [2024-07-14 04:50:39.647433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-14 04:50:39.657148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.693 [2024-07-14 04:50:39.657315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.693 [2024-07-14 04:50:39.657347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.693 [2024-07-14 04:50:39.657362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.693 [2024-07-14 04:50:39.657375] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.693 [2024-07-14 04:50:39.657406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.693 qpair failed and we were unable to recover it. 
00:34:19.693 [2024-07-14 04:50:39.667313] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.693 [2024-07-14 04:50:39.667477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.693 [2024-07-14 04:50:39.667503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.693 [2024-07-14 04:50:39.667517] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.693 [2024-07-14 04:50:39.667530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.693 [2024-07-14 04:50:39.667560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-14 04:50:39.677224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.693 [2024-07-14 04:50:39.677411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.693 [2024-07-14 04:50:39.677437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.693 [2024-07-14 04:50:39.677451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.693 [2024-07-14 04:50:39.677464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.693 [2024-07-14 04:50:39.677496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-14 04:50:39.687246] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.693 [2024-07-14 04:50:39.687403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.693 [2024-07-14 04:50:39.687429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.693 [2024-07-14 04:50:39.687444] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.693 [2024-07-14 04:50:39.687457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.693 [2024-07-14 04:50:39.687486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.693 qpair failed and we were unable to recover it. 
00:34:19.693 [2024-07-14 04:50:39.697301] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.693 [2024-07-14 04:50:39.697505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.693 [2024-07-14 04:50:39.697531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.693 [2024-07-14 04:50:39.697545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.693 [2024-07-14 04:50:39.697559] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.693 [2024-07-14 04:50:39.697594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-14 04:50:39.707293] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.693 [2024-07-14 04:50:39.707452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.693 [2024-07-14 04:50:39.707477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.693 [2024-07-14 04:50:39.707492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.693 [2024-07-14 04:50:39.707505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.693 [2024-07-14 04:50:39.707536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-14 04:50:39.717328] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.693 [2024-07-14 04:50:39.717526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.693 [2024-07-14 04:50:39.717552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.693 [2024-07-14 04:50:39.717567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.693 [2024-07-14 04:50:39.717580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.693 [2024-07-14 04:50:39.717609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.693 qpair failed and we were unable to recover it. 
00:34:19.693 [2024-07-14 04:50:39.727413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.693 [2024-07-14 04:50:39.727624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.693 [2024-07-14 04:50:39.727650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.693 [2024-07-14 04:50:39.727664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.693 [2024-07-14 04:50:39.727677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.693 [2024-07-14 04:50:39.727708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-14 04:50:39.737391] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.693 [2024-07-14 04:50:39.737547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.693 [2024-07-14 04:50:39.737572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.693 [2024-07-14 04:50:39.737586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.693 [2024-07-14 04:50:39.737599] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.693 [2024-07-14 04:50:39.737631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.693 qpair failed and we were unable to recover it. 00:34:19.693 [2024-07-14 04:50:39.747422] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.693 [2024-07-14 04:50:39.747585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.693 [2024-07-14 04:50:39.747611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.693 [2024-07-14 04:50:39.747626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.693 [2024-07-14 04:50:39.747639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.693 [2024-07-14 04:50:39.747681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.693 qpair failed and we were unable to recover it. 
00:34:19.693 [2024-07-14 04:50:39.757456] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.694 [2024-07-14 04:50:39.757609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.694 [2024-07-14 04:50:39.757635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.694 [2024-07-14 04:50:39.757650] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.694 [2024-07-14 04:50:39.757662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.694 [2024-07-14 04:50:39.757694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-14 04:50:39.767474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.694 [2024-07-14 04:50:39.767643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.694 [2024-07-14 04:50:39.767669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.694 [2024-07-14 04:50:39.767683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.694 [2024-07-14 04:50:39.767696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.694 [2024-07-14 04:50:39.767739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-14 04:50:39.777514] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.694 [2024-07-14 04:50:39.777702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.694 [2024-07-14 04:50:39.777727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.694 [2024-07-14 04:50:39.777742] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.694 [2024-07-14 04:50:39.777755] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.694 [2024-07-14 04:50:39.777784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.694 qpair failed and we were unable to recover it. 
00:34:19.694 [2024-07-14 04:50:39.787518] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.694 [2024-07-14 04:50:39.787675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.694 [2024-07-14 04:50:39.787701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.694 [2024-07-14 04:50:39.787716] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.694 [2024-07-14 04:50:39.787734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.694 [2024-07-14 04:50:39.787764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-14 04:50:39.797559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.694 [2024-07-14 04:50:39.797713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.694 [2024-07-14 04:50:39.797739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.694 [2024-07-14 04:50:39.797753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.694 [2024-07-14 04:50:39.797766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.694 [2024-07-14 04:50:39.797796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-14 04:50:39.807616] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.694 [2024-07-14 04:50:39.807811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.694 [2024-07-14 04:50:39.807837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.694 [2024-07-14 04:50:39.807851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.694 [2024-07-14 04:50:39.807872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.694 [2024-07-14 04:50:39.807906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.694 qpair failed and we were unable to recover it. 
00:34:19.694 [2024-07-14 04:50:39.817618] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.694 [2024-07-14 04:50:39.817770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.694 [2024-07-14 04:50:39.817795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.694 [2024-07-14 04:50:39.817810] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.694 [2024-07-14 04:50:39.817822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.694 [2024-07-14 04:50:39.817852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-14 04:50:39.827636] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.694 [2024-07-14 04:50:39.827793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.694 [2024-07-14 04:50:39.827819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.694 [2024-07-14 04:50:39.827833] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.694 [2024-07-14 04:50:39.827846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.694 [2024-07-14 04:50:39.827884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-14 04:50:39.837664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.694 [2024-07-14 04:50:39.837821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.694 [2024-07-14 04:50:39.837846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.694 [2024-07-14 04:50:39.837861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.694 [2024-07-14 04:50:39.837884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.694 [2024-07-14 04:50:39.837936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.694 qpair failed and we were unable to recover it. 
00:34:19.694 [2024-07-14 04:50:39.847688] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.694 [2024-07-14 04:50:39.847845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.694 [2024-07-14 04:50:39.847882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.694 [2024-07-14 04:50:39.847898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.694 [2024-07-14 04:50:39.847911] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.694 [2024-07-14 04:50:39.847941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-14 04:50:39.857711] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.694 [2024-07-14 04:50:39.857862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.694 [2024-07-14 04:50:39.857894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.694 [2024-07-14 04:50:39.857908] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.694 [2024-07-14 04:50:39.857921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.694 [2024-07-14 04:50:39.857952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.694 qpair failed and we were unable to recover it. 00:34:19.694 [2024-07-14 04:50:39.867789] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.695 [2024-07-14 04:50:39.867965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.695 [2024-07-14 04:50:39.867991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.695 [2024-07-14 04:50:39.868005] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.695 [2024-07-14 04:50:39.868032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.695 [2024-07-14 04:50:39.868063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.695 qpair failed and we were unable to recover it. 
00:34:19.695 [2024-07-14 04:50:39.877772] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.695 [2024-07-14 04:50:39.877936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.695 [2024-07-14 04:50:39.877962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.695 [2024-07-14 04:50:39.877983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.695 [2024-07-14 04:50:39.877997] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.695 [2024-07-14 04:50:39.878027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.695 qpair failed and we were unable to recover it. 00:34:19.956 [2024-07-14 04:50:39.887798] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.956 [2024-07-14 04:50:39.887951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.956 [2024-07-14 04:50:39.887979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.956 [2024-07-14 04:50:39.887994] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.956 [2024-07-14 04:50:39.888007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.956 [2024-07-14 04:50:39.888037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.956 qpair failed and we were unable to recover it. 00:34:19.956 [2024-07-14 04:50:39.897816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.956 [2024-07-14 04:50:39.897975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.956 [2024-07-14 04:50:39.898001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.956 [2024-07-14 04:50:39.898015] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.956 [2024-07-14 04:50:39.898029] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.956 [2024-07-14 04:50:39.898059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.956 qpair failed and we were unable to recover it. 
00:34:19.956 [2024-07-14 04:50:39.907863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.956 [2024-07-14 04:50:39.908052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.956 [2024-07-14 04:50:39.908078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.956 [2024-07-14 04:50:39.908093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.956 [2024-07-14 04:50:39.908106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.956 [2024-07-14 04:50:39.908137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.956 qpair failed and we were unable to recover it. 00:34:19.956 [2024-07-14 04:50:39.917907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.956 [2024-07-14 04:50:39.918080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.956 [2024-07-14 04:50:39.918107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.956 [2024-07-14 04:50:39.918121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.956 [2024-07-14 04:50:39.918134] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.956 [2024-07-14 04:50:39.918164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.956 qpair failed and we were unable to recover it. 00:34:19.956 [2024-07-14 04:50:39.927982] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.957 [2024-07-14 04:50:39.928149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.957 [2024-07-14 04:50:39.928175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.957 [2024-07-14 04:50:39.928190] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.957 [2024-07-14 04:50:39.928203] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.957 [2024-07-14 04:50:39.928233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.957 qpair failed and we were unable to recover it. 
00:34:19.957 [2024-07-14 04:50:39.937979] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.957 [2024-07-14 04:50:39.938129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.957 [2024-07-14 04:50:39.938155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.957 [2024-07-14 04:50:39.938169] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.957 [2024-07-14 04:50:39.938182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.957 [2024-07-14 04:50:39.938212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.957 qpair failed and we were unable to recover it. 00:34:19.957 [2024-07-14 04:50:39.948004] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.957 [2024-07-14 04:50:39.948176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.957 [2024-07-14 04:50:39.948201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.957 [2024-07-14 04:50:39.948216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.957 [2024-07-14 04:50:39.948228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.957 [2024-07-14 04:50:39.948258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.957 qpair failed and we were unable to recover it. 00:34:19.957 [2024-07-14 04:50:39.958024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.957 [2024-07-14 04:50:39.958177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.957 [2024-07-14 04:50:39.958203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.957 [2024-07-14 04:50:39.958223] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.957 [2024-07-14 04:50:39.958236] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.957 [2024-07-14 04:50:39.958267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.957 qpair failed and we were unable to recover it. 
00:34:19.957 [2024-07-14 04:50:39.968105] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.957 [2024-07-14 04:50:39.968261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.957 [2024-07-14 04:50:39.968293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.957 [2024-07-14 04:50:39.968309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.957 [2024-07-14 04:50:39.968322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.957 [2024-07-14 04:50:39.968352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.957 qpair failed and we were unable to recover it. 00:34:19.957 [2024-07-14 04:50:39.978055] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.957 [2024-07-14 04:50:39.978205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.957 [2024-07-14 04:50:39.978232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.957 [2024-07-14 04:50:39.978246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.957 [2024-07-14 04:50:39.978260] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.957 [2024-07-14 04:50:39.978291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.957 qpair failed and we were unable to recover it. 00:34:19.957 [2024-07-14 04:50:39.988104] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.957 [2024-07-14 04:50:39.988264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.957 [2024-07-14 04:50:39.988289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.957 [2024-07-14 04:50:39.988304] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.957 [2024-07-14 04:50:39.988316] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.957 [2024-07-14 04:50:39.988345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.957 qpair failed and we were unable to recover it. 
00:34:19.957 [2024-07-14 04:50:39.998149] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.957 [2024-07-14 04:50:39.998321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.957 [2024-07-14 04:50:39.998348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.957 [2024-07-14 04:50:39.998362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.957 [2024-07-14 04:50:39.998375] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.957 [2024-07-14 04:50:39.998405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.957 qpair failed and we were unable to recover it. 00:34:19.957 [2024-07-14 04:50:40.008225] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.957 [2024-07-14 04:50:40.008403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.957 [2024-07-14 04:50:40.008432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.957 [2024-07-14 04:50:40.008446] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.957 [2024-07-14 04:50:40.008459] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.957 [2024-07-14 04:50:40.008499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.957 qpair failed and we were unable to recover it. 00:34:19.957 [2024-07-14 04:50:40.018224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.957 [2024-07-14 04:50:40.018382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.957 [2024-07-14 04:50:40.018411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.957 [2024-07-14 04:50:40.018426] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.957 [2024-07-14 04:50:40.018439] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.957 [2024-07-14 04:50:40.018485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.957 qpair failed and we were unable to recover it. 
00:34:19.957 [2024-07-14 04:50:40.028281] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.957 [2024-07-14 04:50:40.028445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.957 [2024-07-14 04:50:40.028472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.957 [2024-07-14 04:50:40.028487] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.957 [2024-07-14 04:50:40.028499] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.957 [2024-07-14 04:50:40.028533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.957 qpair failed and we were unable to recover it. 00:34:19.957 [2024-07-14 04:50:40.038284] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.957 [2024-07-14 04:50:40.038442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.957 [2024-07-14 04:50:40.038469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.957 [2024-07-14 04:50:40.038484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.957 [2024-07-14 04:50:40.038496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.957 [2024-07-14 04:50:40.038527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.957 qpair failed and we were unable to recover it. 00:34:19.957 [2024-07-14 04:50:40.048268] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.957 [2024-07-14 04:50:40.048427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.957 [2024-07-14 04:50:40.048452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.957 [2024-07-14 04:50:40.048467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.957 [2024-07-14 04:50:40.048479] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.957 [2024-07-14 04:50:40.048510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.957 qpair failed and we were unable to recover it. 
00:34:19.957 [2024-07-14 04:50:40.058316] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.957 [2024-07-14 04:50:40.058513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.957 [2024-07-14 04:50:40.058550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.957 [2024-07-14 04:50:40.058566] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.957 [2024-07-14 04:50:40.058578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.957 [2024-07-14 04:50:40.058610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.957 qpair failed and we were unable to recover it. 00:34:19.957 [2024-07-14 04:50:40.068346] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.957 [2024-07-14 04:50:40.068534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.957 [2024-07-14 04:50:40.068560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.957 [2024-07-14 04:50:40.068574] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.958 [2024-07-14 04:50:40.068587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.958 [2024-07-14 04:50:40.068630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.958 qpair failed and we were unable to recover it. 00:34:19.958 [2024-07-14 04:50:40.078377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.958 [2024-07-14 04:50:40.078574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.958 [2024-07-14 04:50:40.078600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.958 [2024-07-14 04:50:40.078614] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.958 [2024-07-14 04:50:40.078627] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.958 [2024-07-14 04:50:40.078658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.958 qpair failed and we were unable to recover it. 
00:34:19.958 [2024-07-14 04:50:40.088438] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.958 [2024-07-14 04:50:40.088647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.958 [2024-07-14 04:50:40.088675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.958 [2024-07-14 04:50:40.088690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.958 [2024-07-14 04:50:40.088713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.958 [2024-07-14 04:50:40.088743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.958 qpair failed and we were unable to recover it. 00:34:19.958 [2024-07-14 04:50:40.098484] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.958 [2024-07-14 04:50:40.098651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.958 [2024-07-14 04:50:40.098677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.958 [2024-07-14 04:50:40.098692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.958 [2024-07-14 04:50:40.098705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.958 [2024-07-14 04:50:40.098741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.958 qpair failed and we were unable to recover it. 00:34:19.958 [2024-07-14 04:50:40.108446] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.958 [2024-07-14 04:50:40.108602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.958 [2024-07-14 04:50:40.108628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.958 [2024-07-14 04:50:40.108643] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.958 [2024-07-14 04:50:40.108656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.958 [2024-07-14 04:50:40.108685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.958 qpair failed and we were unable to recover it. 
00:34:19.958 [2024-07-14 04:50:40.118476] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.958 [2024-07-14 04:50:40.118636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.958 [2024-07-14 04:50:40.118662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.958 [2024-07-14 04:50:40.118677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.958 [2024-07-14 04:50:40.118689] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.958 [2024-07-14 04:50:40.118719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.958 qpair failed and we were unable to recover it. 00:34:19.958 [2024-07-14 04:50:40.128469] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.958 [2024-07-14 04:50:40.128631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.958 [2024-07-14 04:50:40.128657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.958 [2024-07-14 04:50:40.128671] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.958 [2024-07-14 04:50:40.128684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.958 [2024-07-14 04:50:40.128714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.958 qpair failed and we were unable to recover it. 00:34:19.958 [2024-07-14 04:50:40.138564] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.958 [2024-07-14 04:50:40.138761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.958 [2024-07-14 04:50:40.138787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.958 [2024-07-14 04:50:40.138801] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.958 [2024-07-14 04:50:40.138814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:19.958 [2024-07-14 04:50:40.138845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.958 qpair failed and we were unable to recover it. 
00:34:20.219 [2024-07-14 04:50:40.148569] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.219 [2024-07-14 04:50:40.148729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.219 [2024-07-14 04:50:40.148760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.219 [2024-07-14 04:50:40.148775] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.219 [2024-07-14 04:50:40.148788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.219 [2024-07-14 04:50:40.148819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.219 qpair failed and we were unable to recover it. 00:34:20.219 [2024-07-14 04:50:40.158582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.219 [2024-07-14 04:50:40.158735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.219 [2024-07-14 04:50:40.158761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.219 [2024-07-14 04:50:40.158775] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.219 [2024-07-14 04:50:40.158788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.219 [2024-07-14 04:50:40.158818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.219 qpair failed and we were unable to recover it. 00:34:20.219 [2024-07-14 04:50:40.168624] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.219 [2024-07-14 04:50:40.168780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.219 [2024-07-14 04:50:40.168806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.220 [2024-07-14 04:50:40.168820] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.220 [2024-07-14 04:50:40.168832] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.220 [2024-07-14 04:50:40.168863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.220 qpair failed and we were unable to recover it. 
00:34:20.220 [2024-07-14 04:50:40.178666] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.220 [2024-07-14 04:50:40.178823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.220 [2024-07-14 04:50:40.178848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.220 [2024-07-14 04:50:40.178862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.220 [2024-07-14 04:50:40.178886] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.220 [2024-07-14 04:50:40.178917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.220 qpair failed and we were unable to recover it. 00:34:20.220 [2024-07-14 04:50:40.188688] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.220 [2024-07-14 04:50:40.188896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.220 [2024-07-14 04:50:40.188922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.220 [2024-07-14 04:50:40.188937] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.220 [2024-07-14 04:50:40.188955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.220 [2024-07-14 04:50:40.188986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.220 qpair failed and we were unable to recover it. 00:34:20.220 [2024-07-14 04:50:40.198690] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.220 [2024-07-14 04:50:40.198850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.220 [2024-07-14 04:50:40.198886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.220 [2024-07-14 04:50:40.198905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.220 [2024-07-14 04:50:40.198918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.220 [2024-07-14 04:50:40.198949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.220 qpair failed and we were unable to recover it. 
00:34:20.220 [2024-07-14 04:50:40.208704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.220 [2024-07-14 04:50:40.208855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.220 [2024-07-14 04:50:40.208888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.220 [2024-07-14 04:50:40.208903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.220 [2024-07-14 04:50:40.208916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.220 [2024-07-14 04:50:40.208948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.220 qpair failed and we were unable to recover it. 00:34:20.220 [2024-07-14 04:50:40.218732] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.220 [2024-07-14 04:50:40.218887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.220 [2024-07-14 04:50:40.218913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.220 [2024-07-14 04:50:40.218928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.220 [2024-07-14 04:50:40.218941] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.220 [2024-07-14 04:50:40.218983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.220 qpair failed and we were unable to recover it. 00:34:20.220 [2024-07-14 04:50:40.228775] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.220 [2024-07-14 04:50:40.228940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.220 [2024-07-14 04:50:40.228966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.220 [2024-07-14 04:50:40.228980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.220 [2024-07-14 04:50:40.228994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.220 [2024-07-14 04:50:40.229024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.220 qpair failed and we were unable to recover it. 
00:34:20.220 [2024-07-14 04:50:40.238807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.220 [2024-07-14 04:50:40.238969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.220 [2024-07-14 04:50:40.238996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.220 [2024-07-14 04:50:40.239010] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.220 [2024-07-14 04:50:40.239023] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.220 [2024-07-14 04:50:40.239053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.220 qpair failed and we were unable to recover it. 00:34:20.220 [2024-07-14 04:50:40.248834] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.220 [2024-07-14 04:50:40.248989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.220 [2024-07-14 04:50:40.249016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.220 [2024-07-14 04:50:40.249031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.220 [2024-07-14 04:50:40.249043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.220 [2024-07-14 04:50:40.249086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.220 qpair failed and we were unable to recover it. 00:34:20.220 [2024-07-14 04:50:40.258829] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.220 [2024-07-14 04:50:40.258999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.220 [2024-07-14 04:50:40.259026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.220 [2024-07-14 04:50:40.259040] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.220 [2024-07-14 04:50:40.259053] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.220 [2024-07-14 04:50:40.259082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.220 qpair failed and we were unable to recover it. 
00:34:20.220 [2024-07-14 04:50:40.268884] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.220 [2024-07-14 04:50:40.269087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.220 [2024-07-14 04:50:40.269113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.220 [2024-07-14 04:50:40.269127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.220 [2024-07-14 04:50:40.269140] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.220 [2024-07-14 04:50:40.269170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.220 qpair failed and we were unable to recover it. 00:34:20.220 [2024-07-14 04:50:40.278925] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.220 [2024-07-14 04:50:40.279146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.220 [2024-07-14 04:50:40.279173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.220 [2024-07-14 04:50:40.279193] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.220 [2024-07-14 04:50:40.279206] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.220 [2024-07-14 04:50:40.279251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.220 qpair failed and we were unable to recover it. 00:34:20.220 [2024-07-14 04:50:40.288938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.220 [2024-07-14 04:50:40.289093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.220 [2024-07-14 04:50:40.289120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.220 [2024-07-14 04:50:40.289135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.220 [2024-07-14 04:50:40.289148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.220 [2024-07-14 04:50:40.289177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.220 qpair failed and we were unable to recover it. 
00:34:20.220 [2024-07-14 04:50:40.298962] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.220 [2024-07-14 04:50:40.299110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.220 [2024-07-14 04:50:40.299136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.220 [2024-07-14 04:50:40.299150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.221 [2024-07-14 04:50:40.299163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.221 [2024-07-14 04:50:40.299193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.221 qpair failed and we were unable to recover it. 00:34:20.221 [2024-07-14 04:50:40.309010] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.221 [2024-07-14 04:50:40.309170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.221 [2024-07-14 04:50:40.309197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.221 [2024-07-14 04:50:40.309211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.221 [2024-07-14 04:50:40.309223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.221 [2024-07-14 04:50:40.309253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.221 qpair failed and we were unable to recover it. 00:34:20.221 [2024-07-14 04:50:40.319100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.221 [2024-07-14 04:50:40.319279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.221 [2024-07-14 04:50:40.319305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.221 [2024-07-14 04:50:40.319319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.221 [2024-07-14 04:50:40.319332] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.221 [2024-07-14 04:50:40.319363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.221 qpair failed and we were unable to recover it. 
00:34:20.221 [2024-07-14 04:50:40.329080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.221 [2024-07-14 04:50:40.329237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.221 [2024-07-14 04:50:40.329264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.221 [2024-07-14 04:50:40.329284] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.221 [2024-07-14 04:50:40.329298] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.221 [2024-07-14 04:50:40.329329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.221 qpair failed and we were unable to recover it. 00:34:20.221 [2024-07-14 04:50:40.339112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.221 [2024-07-14 04:50:40.339290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.221 [2024-07-14 04:50:40.339317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.221 [2024-07-14 04:50:40.339332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.221 [2024-07-14 04:50:40.339345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.221 [2024-07-14 04:50:40.339374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.221 qpair failed and we were unable to recover it. 00:34:20.221 [2024-07-14 04:50:40.349189] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.221 [2024-07-14 04:50:40.349351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.221 [2024-07-14 04:50:40.349378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.221 [2024-07-14 04:50:40.349393] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.221 [2024-07-14 04:50:40.349406] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.221 [2024-07-14 04:50:40.349438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.221 qpair failed and we were unable to recover it. 
00:34:20.221 [2024-07-14 04:50:40.359161] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.221 [2024-07-14 04:50:40.359313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.221 [2024-07-14 04:50:40.359341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.221 [2024-07-14 04:50:40.359356] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.221 [2024-07-14 04:50:40.359369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.221 [2024-07-14 04:50:40.359398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.221 qpair failed and we were unable to recover it. 00:34:20.221 [2024-07-14 04:50:40.369190] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.221 [2024-07-14 04:50:40.369338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.221 [2024-07-14 04:50:40.369365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.221 [2024-07-14 04:50:40.369386] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.221 [2024-07-14 04:50:40.369400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.221 [2024-07-14 04:50:40.369430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.221 qpair failed and we were unable to recover it. 00:34:20.221 [2024-07-14 04:50:40.379221] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.221 [2024-07-14 04:50:40.379371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.221 [2024-07-14 04:50:40.379398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.221 [2024-07-14 04:50:40.379413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.221 [2024-07-14 04:50:40.379426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.221 [2024-07-14 04:50:40.379455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.221 qpair failed and we were unable to recover it. 
00:34:20.221 [2024-07-14 04:50:40.389305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.221 [2024-07-14 04:50:40.389491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.221 [2024-07-14 04:50:40.389520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.221 [2024-07-14 04:50:40.389535] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.221 [2024-07-14 04:50:40.389548] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.221 [2024-07-14 04:50:40.389579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.221 qpair failed and we were unable to recover it. 00:34:20.221 [2024-07-14 04:50:40.399315] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.221 [2024-07-14 04:50:40.399474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.221 [2024-07-14 04:50:40.399502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.221 [2024-07-14 04:50:40.399516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.221 [2024-07-14 04:50:40.399529] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.221 [2024-07-14 04:50:40.399560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.221 qpair failed and we were unable to recover it. 00:34:20.221 [2024-07-14 04:50:40.409374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.221 [2024-07-14 04:50:40.409523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.221 [2024-07-14 04:50:40.409549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.221 [2024-07-14 04:50:40.409563] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.221 [2024-07-14 04:50:40.409577] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.221 [2024-07-14 04:50:40.409607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.221 qpair failed and we were unable to recover it. 
00:34:20.483 [2024-07-14 04:50:40.419372] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.484 [2024-07-14 04:50:40.419537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.484 [2024-07-14 04:50:40.419564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.484 [2024-07-14 04:50:40.419579] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.484 [2024-07-14 04:50:40.419592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.484 [2024-07-14 04:50:40.419621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.484 qpair failed and we were unable to recover it. 00:34:20.484 [2024-07-14 04:50:40.429363] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.484 [2024-07-14 04:50:40.429526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.484 [2024-07-14 04:50:40.429552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.484 [2024-07-14 04:50:40.429567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.484 [2024-07-14 04:50:40.429580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.484 [2024-07-14 04:50:40.429611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.484 qpair failed and we were unable to recover it. 00:34:20.484 [2024-07-14 04:50:40.439420] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.484 [2024-07-14 04:50:40.439607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.484 [2024-07-14 04:50:40.439633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.484 [2024-07-14 04:50:40.439648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.484 [2024-07-14 04:50:40.439662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.484 [2024-07-14 04:50:40.439691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.484 qpair failed and we were unable to recover it. 
00:34:20.484 [2024-07-14 04:50:40.449481] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.484 [2024-07-14 04:50:40.449680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.484 [2024-07-14 04:50:40.449707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.484 [2024-07-14 04:50:40.449722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.484 [2024-07-14 04:50:40.449735] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.484 [2024-07-14 04:50:40.449778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.484 qpair failed and we were unable to recover it. 00:34:20.484 [2024-07-14 04:50:40.459412] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.484 [2024-07-14 04:50:40.459568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.484 [2024-07-14 04:50:40.459601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.484 [2024-07-14 04:50:40.459616] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.484 [2024-07-14 04:50:40.459630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.484 [2024-07-14 04:50:40.459660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.484 qpair failed and we were unable to recover it. 00:34:20.484 [2024-07-14 04:50:40.469494] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.484 [2024-07-14 04:50:40.469663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.484 [2024-07-14 04:50:40.469689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.484 [2024-07-14 04:50:40.469704] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.484 [2024-07-14 04:50:40.469720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.484 [2024-07-14 04:50:40.469750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.484 qpair failed and we were unable to recover it. 
00:34:20.484 [2024-07-14 04:50:40.479519] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.484 [2024-07-14 04:50:40.479723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.484 [2024-07-14 04:50:40.479749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.484 [2024-07-14 04:50:40.479763] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.484 [2024-07-14 04:50:40.479776] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.484 [2024-07-14 04:50:40.479818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.484 qpair failed and we were unable to recover it. 00:34:20.484 [2024-07-14 04:50:40.489509] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.484 [2024-07-14 04:50:40.489664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.484 [2024-07-14 04:50:40.489690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.484 [2024-07-14 04:50:40.489704] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.484 [2024-07-14 04:50:40.489717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.484 [2024-07-14 04:50:40.489748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.484 qpair failed and we were unable to recover it. 00:34:20.484 [2024-07-14 04:50:40.499532] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.484 [2024-07-14 04:50:40.499685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.484 [2024-07-14 04:50:40.499710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.484 [2024-07-14 04:50:40.499725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.484 [2024-07-14 04:50:40.499738] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.484 [2024-07-14 04:50:40.499773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.484 qpair failed and we were unable to recover it. 
00:34:20.484 [2024-07-14 04:50:40.509570] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.484 [2024-07-14 04:50:40.509731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.484 [2024-07-14 04:50:40.509758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.484 [2024-07-14 04:50:40.509778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.484 [2024-07-14 04:50:40.509791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.484 [2024-07-14 04:50:40.509823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.484 qpair failed and we were unable to recover it. 00:34:20.484 [2024-07-14 04:50:40.519586] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.484 [2024-07-14 04:50:40.519738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.484 [2024-07-14 04:50:40.519764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.484 [2024-07-14 04:50:40.519778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.484 [2024-07-14 04:50:40.519791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.484 [2024-07-14 04:50:40.519822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.484 qpair failed and we were unable to recover it. 00:34:20.484 [2024-07-14 04:50:40.529626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.484 [2024-07-14 04:50:40.529787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.484 [2024-07-14 04:50:40.529816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.484 [2024-07-14 04:50:40.529831] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.484 [2024-07-14 04:50:40.529845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.484 [2024-07-14 04:50:40.529887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.484 qpair failed and we were unable to recover it. 
00:34:20.484 [2024-07-14 04:50:40.539688] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.484 [2024-07-14 04:50:40.539895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.484 [2024-07-14 04:50:40.539921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.484 [2024-07-14 04:50:40.539936] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.484 [2024-07-14 04:50:40.539949] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.484 [2024-07-14 04:50:40.539980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.484 qpair failed and we were unable to recover it. 00:34:20.484 [2024-07-14 04:50:40.549731] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.484 [2024-07-14 04:50:40.549908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.484 [2024-07-14 04:50:40.549939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.484 [2024-07-14 04:50:40.549955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.484 [2024-07-14 04:50:40.549968] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.484 [2024-07-14 04:50:40.549997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.484 qpair failed and we were unable to recover it. 00:34:20.484 [2024-07-14 04:50:40.559733] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.484 [2024-07-14 04:50:40.559897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.485 [2024-07-14 04:50:40.559931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.485 [2024-07-14 04:50:40.559945] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.485 [2024-07-14 04:50:40.559958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.485 [2024-07-14 04:50:40.559989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.485 qpair failed and we were unable to recover it. 
00:34:20.485 [2024-07-14 04:50:40.569846] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.485 [2024-07-14 04:50:40.570113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.485 [2024-07-14 04:50:40.570146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.485 [2024-07-14 04:50:40.570168] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.485 [2024-07-14 04:50:40.570187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.485 [2024-07-14 04:50:40.570237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.485 qpair failed and we were unable to recover it. 00:34:20.485 [2024-07-14 04:50:40.579804] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.485 [2024-07-14 04:50:40.579971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.485 [2024-07-14 04:50:40.579998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.485 [2024-07-14 04:50:40.580013] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.485 [2024-07-14 04:50:40.580026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.485 [2024-07-14 04:50:40.580056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.485 qpair failed and we were unable to recover it. 00:34:20.485 [2024-07-14 04:50:40.589860] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.485 [2024-07-14 04:50:40.590082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.485 [2024-07-14 04:50:40.590108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.485 [2024-07-14 04:50:40.590123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.485 [2024-07-14 04:50:40.590141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.485 [2024-07-14 04:50:40.590172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.485 qpair failed and we were unable to recover it. 
00:34:20.485 [2024-07-14 04:50:40.599852] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.485 [2024-07-14 04:50:40.600036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.485 [2024-07-14 04:50:40.600063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.485 [2024-07-14 04:50:40.600077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.485 [2024-07-14 04:50:40.600090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.485 [2024-07-14 04:50:40.600121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.485 qpair failed and we were unable to recover it. 00:34:20.485 [2024-07-14 04:50:40.609856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.485 [2024-07-14 04:50:40.610020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.485 [2024-07-14 04:50:40.610046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.485 [2024-07-14 04:50:40.610061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.485 [2024-07-14 04:50:40.610074] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.485 [2024-07-14 04:50:40.610116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.485 qpair failed and we were unable to recover it. 00:34:20.485 [2024-07-14 04:50:40.619916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.485 [2024-07-14 04:50:40.620067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.485 [2024-07-14 04:50:40.620097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.485 [2024-07-14 04:50:40.620111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.485 [2024-07-14 04:50:40.620124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.485 [2024-07-14 04:50:40.620154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.485 qpair failed and we were unable to recover it. 
00:34:20.485 [2024-07-14 04:50:40.629972] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.485 [2024-07-14 04:50:40.630132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.485 [2024-07-14 04:50:40.630159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.485 [2024-07-14 04:50:40.630173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.485 [2024-07-14 04:50:40.630186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.485 [2024-07-14 04:50:40.630217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.485 qpair failed and we were unable to recover it. 00:34:20.485 [2024-07-14 04:50:40.640007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.485 [2024-07-14 04:50:40.640190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.485 [2024-07-14 04:50:40.640217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.485 [2024-07-14 04:50:40.640231] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.485 [2024-07-14 04:50:40.640244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.485 [2024-07-14 04:50:40.640274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.485 qpair failed and we were unable to recover it. 00:34:20.485 [2024-07-14 04:50:40.649991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.485 [2024-07-14 04:50:40.650141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.485 [2024-07-14 04:50:40.650167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.485 [2024-07-14 04:50:40.650181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.485 [2024-07-14 04:50:40.650194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.485 [2024-07-14 04:50:40.650225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.485 qpair failed and we were unable to recover it. 
00:34:20.485 [2024-07-14 04:50:40.660031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.485 [2024-07-14 04:50:40.660236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.485 [2024-07-14 04:50:40.660262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.485 [2024-07-14 04:50:40.660277] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.485 [2024-07-14 04:50:40.660290] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.485 [2024-07-14 04:50:40.660320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.485 qpair failed and we were unable to recover it. 00:34:20.485 [2024-07-14 04:50:40.670092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.485 [2024-07-14 04:50:40.670253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.485 [2024-07-14 04:50:40.670280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.485 [2024-07-14 04:50:40.670300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.485 [2024-07-14 04:50:40.670314] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.485 [2024-07-14 04:50:40.670346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.485 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-14 04:50:40.680084] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.746 [2024-07-14 04:50:40.680247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.746 [2024-07-14 04:50:40.680274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.746 [2024-07-14 04:50:40.680295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.746 [2024-07-14 04:50:40.680310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.746 [2024-07-14 04:50:40.680341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.746 qpair failed and we were unable to recover it. 
00:34:20.746 [2024-07-14 04:50:40.690109] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.746 [2024-07-14 04:50:40.690269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.746 [2024-07-14 04:50:40.690295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.746 [2024-07-14 04:50:40.690309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.746 [2024-07-14 04:50:40.690323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.746 [2024-07-14 04:50:40.690353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-14 04:50:40.700110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.746 [2024-07-14 04:50:40.700258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.746 [2024-07-14 04:50:40.700284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.746 [2024-07-14 04:50:40.700298] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.746 [2024-07-14 04:50:40.700311] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.746 [2024-07-14 04:50:40.700341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-14 04:50:40.710259] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.746 [2024-07-14 04:50:40.710423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.746 [2024-07-14 04:50:40.710449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.746 [2024-07-14 04:50:40.710464] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.746 [2024-07-14 04:50:40.710477] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.746 [2024-07-14 04:50:40.710506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.746 qpair failed and we were unable to recover it. 
00:34:20.746 [2024-07-14 04:50:40.720171] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.746 [2024-07-14 04:50:40.720331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.746 [2024-07-14 04:50:40.720357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.746 [2024-07-14 04:50:40.720371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.746 [2024-07-14 04:50:40.720384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.746 [2024-07-14 04:50:40.720413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-14 04:50:40.730212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.746 [2024-07-14 04:50:40.730364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.746 [2024-07-14 04:50:40.730389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.746 [2024-07-14 04:50:40.730403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.746 [2024-07-14 04:50:40.730416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.746 [2024-07-14 04:50:40.730448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-14 04:50:40.740199] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.746 [2024-07-14 04:50:40.740359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.746 [2024-07-14 04:50:40.740385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.746 [2024-07-14 04:50:40.740399] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.746 [2024-07-14 04:50:40.740412] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.746 [2024-07-14 04:50:40.740442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.746 qpair failed and we were unable to recover it. 
00:34:20.746 [2024-07-14 04:50:40.750273] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.746 [2024-07-14 04:50:40.750448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.746 [2024-07-14 04:50:40.750474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.746 [2024-07-14 04:50:40.750489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.746 [2024-07-14 04:50:40.750502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.746 [2024-07-14 04:50:40.750532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-14 04:50:40.760267] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.746 [2024-07-14 04:50:40.760423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.746 [2024-07-14 04:50:40.760449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.746 [2024-07-14 04:50:40.760463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.746 [2024-07-14 04:50:40.760477] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.746 [2024-07-14 04:50:40.760506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-14 04:50:40.770333] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.746 [2024-07-14 04:50:40.770516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.746 [2024-07-14 04:50:40.770544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.746 [2024-07-14 04:50:40.770567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.746 [2024-07-14 04:50:40.770583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.746 [2024-07-14 04:50:40.770614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.746 qpair failed and we were unable to recover it. 
00:34:20.746 [2024-07-14 04:50:40.780320] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.746 [2024-07-14 04:50:40.780500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.746 [2024-07-14 04:50:40.780527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.746 [2024-07-14 04:50:40.780541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.746 [2024-07-14 04:50:40.780555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.746 [2024-07-14 04:50:40.780584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-14 04:50:40.790347] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.746 [2024-07-14 04:50:40.790507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.746 [2024-07-14 04:50:40.790532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.746 [2024-07-14 04:50:40.790547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.746 [2024-07-14 04:50:40.790560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.746 [2024-07-14 04:50:40.790590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.746 qpair failed and we were unable to recover it. 00:34:20.746 [2024-07-14 04:50:40.800403] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.746 [2024-07-14 04:50:40.800564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.746 [2024-07-14 04:50:40.800591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.746 [2024-07-14 04:50:40.800609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.746 [2024-07-14 04:50:40.800624] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.746 [2024-07-14 04:50:40.800654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.746 qpair failed and we were unable to recover it. 
00:34:20.746 [2024-07-14 04:50:40.810459] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.747 [2024-07-14 04:50:40.810629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.747 [2024-07-14 04:50:40.810655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.747 [2024-07-14 04:50:40.810670] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.747 [2024-07-14 04:50:40.810683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.747 [2024-07-14 04:50:40.810713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-14 04:50:40.820432] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.747 [2024-07-14 04:50:40.820585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.747 [2024-07-14 04:50:40.820611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.747 [2024-07-14 04:50:40.820625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.747 [2024-07-14 04:50:40.820638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.747 [2024-07-14 04:50:40.820670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-14 04:50:40.830525] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.747 [2024-07-14 04:50:40.830712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.747 [2024-07-14 04:50:40.830737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.747 [2024-07-14 04:50:40.830752] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.747 [2024-07-14 04:50:40.830764] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.747 [2024-07-14 04:50:40.830795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.747 qpair failed and we were unable to recover it. 
00:34:20.747 [2024-07-14 04:50:40.840473] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.747 [2024-07-14 04:50:40.840635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.747 [2024-07-14 04:50:40.840662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.747 [2024-07-14 04:50:40.840676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.747 [2024-07-14 04:50:40.840689] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.747 [2024-07-14 04:50:40.840719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-14 04:50:40.850504] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.747 [2024-07-14 04:50:40.850670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.747 [2024-07-14 04:50:40.850696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.747 [2024-07-14 04:50:40.850710] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.747 [2024-07-14 04:50:40.850723] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.747 [2024-07-14 04:50:40.850753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-14 04:50:40.860563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.747 [2024-07-14 04:50:40.860719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.747 [2024-07-14 04:50:40.860749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.747 [2024-07-14 04:50:40.860764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.747 [2024-07-14 04:50:40.860777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.747 [2024-07-14 04:50:40.860807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.747 qpair failed and we were unable to recover it. 
00:34:20.747 [2024-07-14 04:50:40.870584] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.747 [2024-07-14 04:50:40.870745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.747 [2024-07-14 04:50:40.870771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.747 [2024-07-14 04:50:40.870785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.747 [2024-07-14 04:50:40.870798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.747 [2024-07-14 04:50:40.870840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-14 04:50:40.880634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.747 [2024-07-14 04:50:40.880802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.747 [2024-07-14 04:50:40.880828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.747 [2024-07-14 04:50:40.880843] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.747 [2024-07-14 04:50:40.880856] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.747 [2024-07-14 04:50:40.880899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-14 04:50:40.890657] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.747 [2024-07-14 04:50:40.890812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.747 [2024-07-14 04:50:40.890838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.747 [2024-07-14 04:50:40.890852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.747 [2024-07-14 04:50:40.890874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.747 [2024-07-14 04:50:40.890907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.747 qpair failed and we were unable to recover it. 
00:34:20.747 [2024-07-14 04:50:40.900675] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.747 [2024-07-14 04:50:40.900829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.747 [2024-07-14 04:50:40.900854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.747 [2024-07-14 04:50:40.900876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.747 [2024-07-14 04:50:40.900892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.747 [2024-07-14 04:50:40.900928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-14 04:50:40.910707] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.747 [2024-07-14 04:50:40.910872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.747 [2024-07-14 04:50:40.910898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.747 [2024-07-14 04:50:40.910913] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.747 [2024-07-14 04:50:40.910925] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.747 [2024-07-14 04:50:40.910957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.747 qpair failed and we were unable to recover it. 00:34:20.747 [2024-07-14 04:50:40.920704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.747 [2024-07-14 04:50:40.920871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.747 [2024-07-14 04:50:40.920897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.747 [2024-07-14 04:50:40.920911] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.747 [2024-07-14 04:50:40.920924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.747 [2024-07-14 04:50:40.920954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.747 qpair failed and we were unable to recover it. 
00:34:20.747 [2024-07-14 04:50:40.930773] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.747 [2024-07-14 04:50:40.930937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.747 [2024-07-14 04:50:40.930964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.747 [2024-07-14 04:50:40.930983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.747 [2024-07-14 04:50:40.930997] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:20.747 [2024-07-14 04:50:40.931027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:20.747 qpair failed and we were unable to recover it. 00:34:21.007 [2024-07-14 04:50:40.940788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.007 [2024-07-14 04:50:40.940950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.007 [2024-07-14 04:50:40.940977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.007 [2024-07-14 04:50:40.940991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.007 [2024-07-14 04:50:40.941004] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.007 [2024-07-14 04:50:40.941036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.007 qpair failed and we were unable to recover it. 00:34:21.007 [2024-07-14 04:50:40.950826] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.007 [2024-07-14 04:50:40.950994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.007 [2024-07-14 04:50:40.951026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.007 [2024-07-14 04:50:40.951041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.007 [2024-07-14 04:50:40.951054] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.007 [2024-07-14 04:50:40.951083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.007 qpair failed and we were unable to recover it. 
00:34:21.008 [2024-07-14 04:50:40.960941] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.008 [2024-07-14 04:50:40.961097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.008 [2024-07-14 04:50:40.961122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.008 [2024-07-14 04:50:40.961136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.008 [2024-07-14 04:50:40.961149] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.008 [2024-07-14 04:50:40.961181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.008 qpair failed and we were unable to recover it. 00:34:21.008 [2024-07-14 04:50:40.970877] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.008 [2024-07-14 04:50:40.971039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.008 [2024-07-14 04:50:40.971066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.008 [2024-07-14 04:50:40.971080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.008 [2024-07-14 04:50:40.971093] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.008 [2024-07-14 04:50:40.971123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.008 qpair failed and we were unable to recover it. 00:34:21.008 [2024-07-14 04:50:40.980915] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.008 [2024-07-14 04:50:40.981092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.008 [2024-07-14 04:50:40.981118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.008 [2024-07-14 04:50:40.981132] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.008 [2024-07-14 04:50:40.981145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.008 [2024-07-14 04:50:40.981183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.008 qpair failed and we were unable to recover it. 
00:34:21.008 [2024-07-14 04:50:40.990937] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.008 [2024-07-14 04:50:40.991102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.008 [2024-07-14 04:50:40.991126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.008 [2024-07-14 04:50:40.991140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.008 [2024-07-14 04:50:40.991161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.008 [2024-07-14 04:50:40.991192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.008 qpair failed and we were unable to recover it. 00:34:21.008 [2024-07-14 04:50:41.000957] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.008 [2024-07-14 04:50:41.001119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.008 [2024-07-14 04:50:41.001145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.008 [2024-07-14 04:50:41.001160] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.008 [2024-07-14 04:50:41.001173] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.008 [2024-07-14 04:50:41.001216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.008 qpair failed and we were unable to recover it. 00:34:21.008 [2024-07-14 04:50:41.010967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.008 [2024-07-14 04:50:41.011124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.008 [2024-07-14 04:50:41.011150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.008 [2024-07-14 04:50:41.011164] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.008 [2024-07-14 04:50:41.011177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.008 [2024-07-14 04:50:41.011207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.008 qpair failed and we were unable to recover it. 
00:34:21.008 [2024-07-14 04:50:41.021014] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.008 [2024-07-14 04:50:41.021170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.008 [2024-07-14 04:50:41.021196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.008 [2024-07-14 04:50:41.021211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.008 [2024-07-14 04:50:41.021224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.008 [2024-07-14 04:50:41.021254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.008 qpair failed and we were unable to recover it. 00:34:21.008 [2024-07-14 04:50:41.031062] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.008 [2024-07-14 04:50:41.031227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.008 [2024-07-14 04:50:41.031253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.008 [2024-07-14 04:50:41.031267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.008 [2024-07-14 04:50:41.031280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.008 [2024-07-14 04:50:41.031311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.008 qpair failed and we were unable to recover it. 00:34:21.008 [2024-07-14 04:50:41.041087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.008 [2024-07-14 04:50:41.041248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.008 [2024-07-14 04:50:41.041274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.008 [2024-07-14 04:50:41.041288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.008 [2024-07-14 04:50:41.041301] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.008 [2024-07-14 04:50:41.041331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.008 qpair failed and we were unable to recover it. 
00:34:21.008 [2024-07-14 04:50:41.051150] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.008 [2024-07-14 04:50:41.051334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.008 [2024-07-14 04:50:41.051360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.008 [2024-07-14 04:50:41.051374] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.008 [2024-07-14 04:50:41.051387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.008 [2024-07-14 04:50:41.051416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.008 qpair failed and we were unable to recover it. 00:34:21.008 [2024-07-14 04:50:41.061099] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.008 [2024-07-14 04:50:41.061257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.008 [2024-07-14 04:50:41.061283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.008 [2024-07-14 04:50:41.061297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.008 [2024-07-14 04:50:41.061310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.009 [2024-07-14 04:50:41.061342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.009 qpair failed and we were unable to recover it. 00:34:21.009 [2024-07-14 04:50:41.071187] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.009 [2024-07-14 04:50:41.071361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.009 [2024-07-14 04:50:41.071387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.009 [2024-07-14 04:50:41.071402] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.009 [2024-07-14 04:50:41.071414] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.009 [2024-07-14 04:50:41.071443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.009 qpair failed and we were unable to recover it. 
00:34:21.009 [2024-07-14 04:50:41.081188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.009 [2024-07-14 04:50:41.081343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.009 [2024-07-14 04:50:41.081370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.009 [2024-07-14 04:50:41.081385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.009 [2024-07-14 04:50:41.081403] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.009 [2024-07-14 04:50:41.081434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.009 qpair failed and we were unable to recover it. 00:34:21.009 [2024-07-14 04:50:41.091201] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.009 [2024-07-14 04:50:41.091367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.009 [2024-07-14 04:50:41.091393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.009 [2024-07-14 04:50:41.091407] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.009 [2024-07-14 04:50:41.091421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.009 [2024-07-14 04:50:41.091451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.009 qpair failed and we were unable to recover it. 00:34:21.009 [2024-07-14 04:50:41.101228] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.009 [2024-07-14 04:50:41.101421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.009 [2024-07-14 04:50:41.101450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.009 [2024-07-14 04:50:41.101466] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.009 [2024-07-14 04:50:41.101479] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.009 [2024-07-14 04:50:41.101522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.009 qpair failed and we were unable to recover it. 
00:34:21.009 [2024-07-14 04:50:41.111261] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.009 [2024-07-14 04:50:41.111418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.009 [2024-07-14 04:50:41.111444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.009 [2024-07-14 04:50:41.111459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.009 [2024-07-14 04:50:41.111472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.009 [2024-07-14 04:50:41.111502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.009 qpair failed and we were unable to recover it. 00:34:21.009 [2024-07-14 04:50:41.121283] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.009 [2024-07-14 04:50:41.121441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.009 [2024-07-14 04:50:41.121467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.009 [2024-07-14 04:50:41.121481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.009 [2024-07-14 04:50:41.121495] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.009 [2024-07-14 04:50:41.121526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.009 qpair failed and we were unable to recover it. 00:34:21.009 [2024-07-14 04:50:41.131343] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.009 [2024-07-14 04:50:41.131504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.009 [2024-07-14 04:50:41.131530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.009 [2024-07-14 04:50:41.131545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.009 [2024-07-14 04:50:41.131557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.009 [2024-07-14 04:50:41.131587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.009 qpair failed and we were unable to recover it. 
00:34:21.009 [2024-07-14 04:50:41.141348] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.009 [2024-07-14 04:50:41.141506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.009 [2024-07-14 04:50:41.141532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.009 [2024-07-14 04:50:41.141547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.009 [2024-07-14 04:50:41.141560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.009 [2024-07-14 04:50:41.141591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.009 qpair failed and we were unable to recover it. 00:34:21.009 [2024-07-14 04:50:41.151396] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.009 [2024-07-14 04:50:41.151560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.009 [2024-07-14 04:50:41.151586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.009 [2024-07-14 04:50:41.151601] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.009 [2024-07-14 04:50:41.151614] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.009 [2024-07-14 04:50:41.151646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.009 qpair failed and we were unable to recover it. 00:34:21.009 [2024-07-14 04:50:41.161433] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.009 [2024-07-14 04:50:41.161616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.009 [2024-07-14 04:50:41.161643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.009 [2024-07-14 04:50:41.161657] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.009 [2024-07-14 04:50:41.161673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.009 [2024-07-14 04:50:41.161704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.009 qpair failed and we were unable to recover it. 
00:34:21.009 [2024-07-14 04:50:41.171473] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.009 [2024-07-14 04:50:41.171632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.009 [2024-07-14 04:50:41.171659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.009 [2024-07-14 04:50:41.171680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.010 [2024-07-14 04:50:41.171694] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.010 [2024-07-14 04:50:41.171724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.010 qpair failed and we were unable to recover it. 00:34:21.010 [2024-07-14 04:50:41.181447] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.010 [2024-07-14 04:50:41.181600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.010 [2024-07-14 04:50:41.181627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.010 [2024-07-14 04:50:41.181651] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.010 [2024-07-14 04:50:41.181667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.010 [2024-07-14 04:50:41.181698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.010 qpair failed and we were unable to recover it. 00:34:21.010 [2024-07-14 04:50:41.191514] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.010 [2024-07-14 04:50:41.191688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.010 [2024-07-14 04:50:41.191715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.010 [2024-07-14 04:50:41.191729] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.010 [2024-07-14 04:50:41.191744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.010 [2024-07-14 04:50:41.191774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.010 qpair failed and we were unable to recover it. 
00:34:21.269 [2024-07-14 04:50:41.201536] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.269 [2024-07-14 04:50:41.201696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.269 [2024-07-14 04:50:41.201723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.269 [2024-07-14 04:50:41.201737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.269 [2024-07-14 04:50:41.201750] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.269 [2024-07-14 04:50:41.201779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.269 qpair failed and we were unable to recover it. 00:34:21.270 [2024-07-14 04:50:41.211639] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.270 [2024-07-14 04:50:41.211835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.270 [2024-07-14 04:50:41.211860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.270 [2024-07-14 04:50:41.211887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.270 [2024-07-14 04:50:41.211901] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.270 [2024-07-14 04:50:41.211931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.270 qpair failed and we were unable to recover it. 00:34:21.270 [2024-07-14 04:50:41.221560] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.270 [2024-07-14 04:50:41.221751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.270 [2024-07-14 04:50:41.221778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.270 [2024-07-14 04:50:41.221792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.270 [2024-07-14 04:50:41.221805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.270 [2024-07-14 04:50:41.221835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.270 qpair failed and we were unable to recover it. 
00:34:21.270 [2024-07-14 04:50:41.231603] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.270 [2024-07-14 04:50:41.231762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.270 [2024-07-14 04:50:41.231787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.270 [2024-07-14 04:50:41.231802] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.270 [2024-07-14 04:50:41.231815] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.270 [2024-07-14 04:50:41.231844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.270 qpair failed and we were unable to recover it. 00:34:21.270 [2024-07-14 04:50:41.241642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.270 [2024-07-14 04:50:41.241797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.270 [2024-07-14 04:50:41.241822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.270 [2024-07-14 04:50:41.241836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.270 [2024-07-14 04:50:41.241849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.270 [2024-07-14 04:50:41.241887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.270 qpair failed and we were unable to recover it. 00:34:21.270 [2024-07-14 04:50:41.251655] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.270 [2024-07-14 04:50:41.251849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.270 [2024-07-14 04:50:41.251885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.270 [2024-07-14 04:50:41.251901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.270 [2024-07-14 04:50:41.251914] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.270 [2024-07-14 04:50:41.251957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.270 qpair failed and we were unable to recover it. 
00:34:21.270 [2024-07-14 04:50:41.261713] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.270 [2024-07-14 04:50:41.261881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.270 [2024-07-14 04:50:41.261912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.270 [2024-07-14 04:50:41.261928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.270 [2024-07-14 04:50:41.261940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.270 [2024-07-14 04:50:41.261971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.270 qpair failed and we were unable to recover it. 00:34:21.270 [2024-07-14 04:50:41.271725] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.270 [2024-07-14 04:50:41.271909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.270 [2024-07-14 04:50:41.271935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.270 [2024-07-14 04:50:41.271949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.270 [2024-07-14 04:50:41.271963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.270 [2024-07-14 04:50:41.271993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.270 qpair failed and we were unable to recover it. 00:34:21.270 [2024-07-14 04:50:41.281745] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.270 [2024-07-14 04:50:41.281913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.270 [2024-07-14 04:50:41.281939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.270 [2024-07-14 04:50:41.281954] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.270 [2024-07-14 04:50:41.281967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.270 [2024-07-14 04:50:41.281997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.270 qpair failed and we were unable to recover it. 
00:34:21.270 [2024-07-14 04:50:41.291890] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.270 [2024-07-14 04:50:41.292067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.270 [2024-07-14 04:50:41.292093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.270 [2024-07-14 04:50:41.292107] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.270 [2024-07-14 04:50:41.292120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.270 [2024-07-14 04:50:41.292152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.270 qpair failed and we were unable to recover it. 00:34:21.270 [2024-07-14 04:50:41.301811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.270 [2024-07-14 04:50:41.301963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.270 [2024-07-14 04:50:41.301989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.270 [2024-07-14 04:50:41.302003] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.270 [2024-07-14 04:50:41.302016] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.270 [2024-07-14 04:50:41.302052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.270 qpair failed and we were unable to recover it. 00:34:21.270 [2024-07-14 04:50:41.311842] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.270 [2024-07-14 04:50:41.312007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.270 [2024-07-14 04:50:41.312034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.270 [2024-07-14 04:50:41.312048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.270 [2024-07-14 04:50:41.312061] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.270 [2024-07-14 04:50:41.312093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.270 qpair failed and we were unable to recover it. 
00:34:21.270 [2024-07-14 04:50:41.321862] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.271 [2024-07-14 04:50:41.322028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.271 [2024-07-14 04:50:41.322054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.271 [2024-07-14 04:50:41.322068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.271 [2024-07-14 04:50:41.322081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.271 [2024-07-14 04:50:41.322111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.271 qpair failed and we were unable to recover it. 00:34:21.271 [2024-07-14 04:50:41.331968] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.271 [2024-07-14 04:50:41.332142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.271 [2024-07-14 04:50:41.332169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.271 [2024-07-14 04:50:41.332190] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.271 [2024-07-14 04:50:41.332204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.271 [2024-07-14 04:50:41.332235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.271 qpair failed and we were unable to recover it. 00:34:21.271 [2024-07-14 04:50:41.341984] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.271 [2024-07-14 04:50:41.342171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.271 [2024-07-14 04:50:41.342198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.271 [2024-07-14 04:50:41.342213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.271 [2024-07-14 04:50:41.342226] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.271 [2024-07-14 04:50:41.342257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.271 qpair failed and we were unable to recover it. 
00:34:21.271 [2024-07-14 04:50:41.351947] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.271 [2024-07-14 04:50:41.352102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.271 [2024-07-14 04:50:41.352133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.271 [2024-07-14 04:50:41.352148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.271 [2024-07-14 04:50:41.352161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.271 [2024-07-14 04:50:41.352191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.271 qpair failed and we were unable to recover it. 00:34:21.271 [2024-07-14 04:50:41.362008] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.271 [2024-07-14 04:50:41.362166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.271 [2024-07-14 04:50:41.362192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.271 [2024-07-14 04:50:41.362206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.271 [2024-07-14 04:50:41.362219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.271 [2024-07-14 04:50:41.362250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.271 qpair failed and we were unable to recover it. 00:34:21.271 [2024-07-14 04:50:41.372026] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.271 [2024-07-14 04:50:41.372174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.271 [2024-07-14 04:50:41.372201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.271 [2024-07-14 04:50:41.372215] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.271 [2024-07-14 04:50:41.372228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.271 [2024-07-14 04:50:41.372258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.271 qpair failed and we were unable to recover it. 
00:34:21.271 [2024-07-14 04:50:41.382092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.271 [2024-07-14 04:50:41.382273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.271 [2024-07-14 04:50:41.382299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.271 [2024-07-14 04:50:41.382314] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.271 [2024-07-14 04:50:41.382327] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.271 [2024-07-14 04:50:41.382357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.271 qpair failed and we were unable to recover it. 00:34:21.271 [2024-07-14 04:50:41.392066] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.271 [2024-07-14 04:50:41.392217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.271 [2024-07-14 04:50:41.392243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.271 [2024-07-14 04:50:41.392257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.271 [2024-07-14 04:50:41.392270] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.271 [2024-07-14 04:50:41.392305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.271 qpair failed and we were unable to recover it. 00:34:21.271 [2024-07-14 04:50:41.402112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.271 [2024-07-14 04:50:41.402268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.271 [2024-07-14 04:50:41.402294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.271 [2024-07-14 04:50:41.402308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.271 [2024-07-14 04:50:41.402321] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.271 [2024-07-14 04:50:41.402351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.271 qpair failed and we were unable to recover it. 
00:34:21.271 [2024-07-14 04:50:41.412180] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.271 [2024-07-14 04:50:41.412337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.271 [2024-07-14 04:50:41.412362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.271 [2024-07-14 04:50:41.412377] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.271 [2024-07-14 04:50:41.412390] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.271 [2024-07-14 04:50:41.412420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.271 qpair failed and we were unable to recover it. 00:34:21.271 [2024-07-14 04:50:41.422137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.271 [2024-07-14 04:50:41.422298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.271 [2024-07-14 04:50:41.422324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.271 [2024-07-14 04:50:41.422339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.271 [2024-07-14 04:50:41.422352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.271 [2024-07-14 04:50:41.422382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.271 qpair failed and we were unable to recover it. 00:34:21.271 [2024-07-14 04:50:41.432183] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.271 [2024-07-14 04:50:41.432353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.272 [2024-07-14 04:50:41.432378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.272 [2024-07-14 04:50:41.432392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.272 [2024-07-14 04:50:41.432405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.272 [2024-07-14 04:50:41.432436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.272 qpair failed and we were unable to recover it. 
00:34:21.272 [2024-07-14 04:50:41.442215] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.272 [2024-07-14 04:50:41.442374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.272 [2024-07-14 04:50:41.442400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.272 [2024-07-14 04:50:41.442414] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.272 [2024-07-14 04:50:41.442427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.272 [2024-07-14 04:50:41.442457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.272 qpair failed and we were unable to recover it. 00:34:21.272 [2024-07-14 04:50:41.452243] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.272 [2024-07-14 04:50:41.452392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.272 [2024-07-14 04:50:41.452418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.272 [2024-07-14 04:50:41.452432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.272 [2024-07-14 04:50:41.452445] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.272 [2024-07-14 04:50:41.452476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.272 qpair failed and we were unable to recover it. 00:34:21.532 [2024-07-14 04:50:41.462254] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.532 [2024-07-14 04:50:41.462424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.532 [2024-07-14 04:50:41.462450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.532 [2024-07-14 04:50:41.462469] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.532 [2024-07-14 04:50:41.462483] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.532 [2024-07-14 04:50:41.462514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.532 qpair failed and we were unable to recover it. 
00:34:21.532 [2024-07-14 04:50:41.472333] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.532 [2024-07-14 04:50:41.472517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.532 [2024-07-14 04:50:41.472543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.532 [2024-07-14 04:50:41.472557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.532 [2024-07-14 04:50:41.472570] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.532 [2024-07-14 04:50:41.472600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.532 qpair failed and we were unable to recover it. 00:34:21.532 [2024-07-14 04:50:41.482304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.532 [2024-07-14 04:50:41.482453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.532 [2024-07-14 04:50:41.482479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.532 [2024-07-14 04:50:41.482493] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.532 [2024-07-14 04:50:41.482512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.532 [2024-07-14 04:50:41.482542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.532 qpair failed and we were unable to recover it. 00:34:21.532 [2024-07-14 04:50:41.492353] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.532 [2024-07-14 04:50:41.492512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.532 [2024-07-14 04:50:41.492538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.532 [2024-07-14 04:50:41.492553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.532 [2024-07-14 04:50:41.492566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.532 [2024-07-14 04:50:41.492595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.532 qpair failed and we were unable to recover it. 
00:34:21.532 [2024-07-14 04:50:41.502387] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.532 [2024-07-14 04:50:41.502546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.532 [2024-07-14 04:50:41.502572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.532 [2024-07-14 04:50:41.502587] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.532 [2024-07-14 04:50:41.502600] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.532 [2024-07-14 04:50:41.502643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.532 qpair failed and we were unable to recover it. 00:34:21.532 [2024-07-14 04:50:41.512378] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.532 [2024-07-14 04:50:41.512532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.532 [2024-07-14 04:50:41.512558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.532 [2024-07-14 04:50:41.512573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.532 [2024-07-14 04:50:41.512585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.532 [2024-07-14 04:50:41.512615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.532 qpair failed and we were unable to recover it. 00:34:21.532 [2024-07-14 04:50:41.522391] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.532 [2024-07-14 04:50:41.522559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.532 [2024-07-14 04:50:41.522586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.532 [2024-07-14 04:50:41.522600] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.532 [2024-07-14 04:50:41.522613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.532 [2024-07-14 04:50:41.522642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.532 qpair failed and we were unable to recover it. 
00:34:21.532 [2024-07-14 04:50:41.532461] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.532 [2024-07-14 04:50:41.532610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.532 [2024-07-14 04:50:41.532636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.532 [2024-07-14 04:50:41.532651] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.532 [2024-07-14 04:50:41.532664] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.532 [2024-07-14 04:50:41.532693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.532 qpair failed and we were unable to recover it. 00:34:21.532 [2024-07-14 04:50:41.542457] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.532 [2024-07-14 04:50:41.542608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.532 [2024-07-14 04:50:41.542634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.532 [2024-07-14 04:50:41.542648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.532 [2024-07-14 04:50:41.542661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.532 [2024-07-14 04:50:41.542703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.532 qpair failed and we were unable to recover it. 00:34:21.532 [2024-07-14 04:50:41.552512] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.532 [2024-07-14 04:50:41.552669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.532 [2024-07-14 04:50:41.552695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.532 [2024-07-14 04:50:41.552709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.532 [2024-07-14 04:50:41.552721] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.532 [2024-07-14 04:50:41.552751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.532 qpair failed and we were unable to recover it. 
00:34:21.532 [2024-07-14 04:50:41.562551] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.532 [2024-07-14 04:50:41.562710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.532 [2024-07-14 04:50:41.562736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.533 [2024-07-14 04:50:41.562750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.533 [2024-07-14 04:50:41.562763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.533 [2024-07-14 04:50:41.562794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.533 qpair failed and we were unable to recover it. 00:34:21.533 [2024-07-14 04:50:41.572643] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.533 [2024-07-14 04:50:41.572795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.533 [2024-07-14 04:50:41.572821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.533 [2024-07-14 04:50:41.572842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.533 [2024-07-14 04:50:41.572855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.533 [2024-07-14 04:50:41.572899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.533 qpair failed and we were unable to recover it. 00:34:21.533 [2024-07-14 04:50:41.582628] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.533 [2024-07-14 04:50:41.582832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.533 [2024-07-14 04:50:41.582858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.533 [2024-07-14 04:50:41.582880] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.533 [2024-07-14 04:50:41.582894] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.533 [2024-07-14 04:50:41.582924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.533 qpair failed and we were unable to recover it. 
00:34:21.533 [2024-07-14 04:50:41.592630] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.533 [2024-07-14 04:50:41.592788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.533 [2024-07-14 04:50:41.592814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.533 [2024-07-14 04:50:41.592829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.533 [2024-07-14 04:50:41.592842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.533 [2024-07-14 04:50:41.592879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.533 qpair failed and we were unable to recover it. 00:34:21.533 [2024-07-14 04:50:41.602623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.533 [2024-07-14 04:50:41.602777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.533 [2024-07-14 04:50:41.602803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.533 [2024-07-14 04:50:41.602817] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.533 [2024-07-14 04:50:41.602830] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.533 [2024-07-14 04:50:41.602861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.533 qpair failed and we were unable to recover it. 00:34:21.533 [2024-07-14 04:50:41.612666] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.533 [2024-07-14 04:50:41.612818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.533 [2024-07-14 04:50:41.612844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.533 [2024-07-14 04:50:41.612858] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.533 [2024-07-14 04:50:41.612881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.533 [2024-07-14 04:50:41.612913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.533 qpair failed and we were unable to recover it. 
00:34:21.533 [2024-07-14 04:50:41.622678] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.533 [2024-07-14 04:50:41.622831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.533 [2024-07-14 04:50:41.622857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.533 [2024-07-14 04:50:41.622882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.533 [2024-07-14 04:50:41.622897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.533 [2024-07-14 04:50:41.622927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.533 qpair failed and we were unable to recover it. 00:34:21.533 [2024-07-14 04:50:41.632731] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.533 [2024-07-14 04:50:41.632893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.533 [2024-07-14 04:50:41.632919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.533 [2024-07-14 04:50:41.632934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.533 [2024-07-14 04:50:41.632947] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.533 [2024-07-14 04:50:41.632978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.533 qpair failed and we were unable to recover it. 00:34:21.533 [2024-07-14 04:50:41.642760] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.533 [2024-07-14 04:50:41.642923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.533 [2024-07-14 04:50:41.642949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.533 [2024-07-14 04:50:41.642964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.533 [2024-07-14 04:50:41.642977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.533 [2024-07-14 04:50:41.643008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.533 qpair failed and we were unable to recover it. 
00:34:21.533 [2024-07-14 04:50:41.652793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.533 [2024-07-14 04:50:41.652965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.533 [2024-07-14 04:50:41.652991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.533 [2024-07-14 04:50:41.653005] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.533 [2024-07-14 04:50:41.653018] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.533 [2024-07-14 04:50:41.653049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.533 qpair failed and we were unable to recover it. 00:34:21.533 [2024-07-14 04:50:41.662813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.533 [2024-07-14 04:50:41.662968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.533 [2024-07-14 04:50:41.662999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.533 [2024-07-14 04:50:41.663014] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.533 [2024-07-14 04:50:41.663028] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.533 [2024-07-14 04:50:41.663070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.533 qpair failed and we were unable to recover it. 00:34:21.533 [2024-07-14 04:50:41.672843] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.533 [2024-07-14 04:50:41.673022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.533 [2024-07-14 04:50:41.673048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.533 [2024-07-14 04:50:41.673063] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.533 [2024-07-14 04:50:41.673076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.533 [2024-07-14 04:50:41.673106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.533 qpair failed and we were unable to recover it. 
00:34:21.533 [2024-07-14 04:50:41.682901] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.533 [2024-07-14 04:50:41.683078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.533 [2024-07-14 04:50:41.683105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.533 [2024-07-14 04:50:41.683119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.533 [2024-07-14 04:50:41.683133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.533 [2024-07-14 04:50:41.683163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.533 qpair failed and we were unable to recover it. 00:34:21.533 [2024-07-14 04:50:41.692918] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.533 [2024-07-14 04:50:41.693108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.533 [2024-07-14 04:50:41.693134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.533 [2024-07-14 04:50:41.693155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.533 [2024-07-14 04:50:41.693167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.533 [2024-07-14 04:50:41.693198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.533 qpair failed and we were unable to recover it. 00:34:21.533 [2024-07-14 04:50:41.702933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.533 [2024-07-14 04:50:41.703104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.533 [2024-07-14 04:50:41.703130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.533 [2024-07-14 04:50:41.703154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.533 [2024-07-14 04:50:41.703167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.534 [2024-07-14 04:50:41.703205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.534 qpair failed and we were unable to recover it. 
00:34:21.534 [2024-07-14 04:50:41.712962] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.534 [2024-07-14 04:50:41.713115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.534 [2024-07-14 04:50:41.713140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.534 [2024-07-14 04:50:41.713157] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.534 [2024-07-14 04:50:41.713170] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.534 [2024-07-14 04:50:41.713200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.534 qpair failed and we were unable to recover it. 00:34:21.794 [2024-07-14 04:50:41.723021] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.794 [2024-07-14 04:50:41.723198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.794 [2024-07-14 04:50:41.723224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.794 [2024-07-14 04:50:41.723239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.794 [2024-07-14 04:50:41.723252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.794 [2024-07-14 04:50:41.723281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.794 qpair failed and we were unable to recover it. 00:34:21.794 [2024-07-14 04:50:41.733095] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.794 [2024-07-14 04:50:41.733249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.794 [2024-07-14 04:50:41.733274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.794 [2024-07-14 04:50:41.733289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.794 [2024-07-14 04:50:41.733302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.794 [2024-07-14 04:50:41.733331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.794 qpair failed and we were unable to recover it. 
00:34:21.794 [2024-07-14 04:50:41.743091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.794 [2024-07-14 04:50:41.743262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.794 [2024-07-14 04:50:41.743288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.794 [2024-07-14 04:50:41.743302] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.794 [2024-07-14 04:50:41.743315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.794 [2024-07-14 04:50:41.743345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.794 qpair failed and we were unable to recover it. 00:34:21.794 [2024-07-14 04:50:41.753099] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.794 [2024-07-14 04:50:41.753255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.794 [2024-07-14 04:50:41.753290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.794 [2024-07-14 04:50:41.753305] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.794 [2024-07-14 04:50:41.753318] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.794 [2024-07-14 04:50:41.753347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.794 qpair failed and we were unable to recover it. 00:34:21.794 [2024-07-14 04:50:41.763090] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.794 [2024-07-14 04:50:41.763268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.794 [2024-07-14 04:50:41.763294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.794 [2024-07-14 04:50:41.763308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.794 [2024-07-14 04:50:41.763321] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.794 [2024-07-14 04:50:41.763353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.794 qpair failed and we were unable to recover it. 
00:34:21.794 [2024-07-14 04:50:41.773108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.794 [2024-07-14 04:50:41.773265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.794 [2024-07-14 04:50:41.773290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.794 [2024-07-14 04:50:41.773304] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.794 [2024-07-14 04:50:41.773317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.794 [2024-07-14 04:50:41.773347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.795 qpair failed and we were unable to recover it. 00:34:21.795 [2024-07-14 04:50:41.783290] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.795 [2024-07-14 04:50:41.783462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.795 [2024-07-14 04:50:41.783488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.795 [2024-07-14 04:50:41.783502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.795 [2024-07-14 04:50:41.783516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.795 [2024-07-14 04:50:41.783545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.795 qpair failed and we were unable to recover it. 00:34:21.795 [2024-07-14 04:50:41.793180] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.795 [2024-07-14 04:50:41.793351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.795 [2024-07-14 04:50:41.793377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.795 [2024-07-14 04:50:41.793391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.795 [2024-07-14 04:50:41.793404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.795 [2024-07-14 04:50:41.793440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.795 qpair failed and we were unable to recover it. 
00:34:21.795 [2024-07-14 04:50:41.803250] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.795 [2024-07-14 04:50:41.803421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.795 [2024-07-14 04:50:41.803447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.795 [2024-07-14 04:50:41.803462] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.795 [2024-07-14 04:50:41.803475] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.795 [2024-07-14 04:50:41.803505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.795 qpair failed and we were unable to recover it. 00:34:21.795 [2024-07-14 04:50:41.813309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.795 [2024-07-14 04:50:41.813490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.795 [2024-07-14 04:50:41.813515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.795 [2024-07-14 04:50:41.813530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.795 [2024-07-14 04:50:41.813543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.795 [2024-07-14 04:50:41.813572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.795 qpair failed and we were unable to recover it. 00:34:21.795 [2024-07-14 04:50:41.823264] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.795 [2024-07-14 04:50:41.823418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.795 [2024-07-14 04:50:41.823444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.795 [2024-07-14 04:50:41.823458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.795 [2024-07-14 04:50:41.823472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.795 [2024-07-14 04:50:41.823503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.795 qpair failed and we were unable to recover it. 
00:34:21.795 [2024-07-14 04:50:41.833288] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.795 [2024-07-14 04:50:41.833449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.795 [2024-07-14 04:50:41.833474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.795 [2024-07-14 04:50:41.833488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.795 [2024-07-14 04:50:41.833501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.795 [2024-07-14 04:50:41.833533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.795 qpair failed and we were unable to recover it. 00:34:21.795 [2024-07-14 04:50:41.843325] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.795 [2024-07-14 04:50:41.843476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.795 [2024-07-14 04:50:41.843507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.795 [2024-07-14 04:50:41.843522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.795 [2024-07-14 04:50:41.843535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.795 [2024-07-14 04:50:41.843564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.795 qpair failed and we were unable to recover it. 00:34:21.795 [2024-07-14 04:50:41.853363] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.795 [2024-07-14 04:50:41.853536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.795 [2024-07-14 04:50:41.853562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.795 [2024-07-14 04:50:41.853576] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.795 [2024-07-14 04:50:41.853589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.795 [2024-07-14 04:50:41.853618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.795 qpair failed and we were unable to recover it. 
00:34:21.795 [2024-07-14 04:50:41.863392] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.795 [2024-07-14 04:50:41.863545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.795 [2024-07-14 04:50:41.863571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.795 [2024-07-14 04:50:41.863586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.795 [2024-07-14 04:50:41.863598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.795 [2024-07-14 04:50:41.863628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.795 qpair failed and we were unable to recover it. 00:34:21.795 [2024-07-14 04:50:41.873513] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.795 [2024-07-14 04:50:41.873677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.795 [2024-07-14 04:50:41.873702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.795 [2024-07-14 04:50:41.873716] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.795 [2024-07-14 04:50:41.873729] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.795 [2024-07-14 04:50:41.873760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.795 qpair failed and we were unable to recover it. 00:34:21.795 [2024-07-14 04:50:41.883511] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.795 [2024-07-14 04:50:41.883674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.795 [2024-07-14 04:50:41.883701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.795 [2024-07-14 04:50:41.883716] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.795 [2024-07-14 04:50:41.883734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.795 [2024-07-14 04:50:41.883778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.795 qpair failed and we were unable to recover it. 
00:34:21.795 [2024-07-14 04:50:41.893472] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.795 [2024-07-14 04:50:41.893619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.795 [2024-07-14 04:50:41.893646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.795 [2024-07-14 04:50:41.893661] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.795 [2024-07-14 04:50:41.893674] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.795 [2024-07-14 04:50:41.893703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.795 qpair failed and we were unable to recover it. 00:34:21.795 [2024-07-14 04:50:41.903555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.795 [2024-07-14 04:50:41.903702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.795 [2024-07-14 04:50:41.903728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.795 [2024-07-14 04:50:41.903742] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.795 [2024-07-14 04:50:41.903755] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.795 [2024-07-14 04:50:41.903787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.795 qpair failed and we were unable to recover it. 00:34:21.795 [2024-07-14 04:50:41.913516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.795 [2024-07-14 04:50:41.913679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.795 [2024-07-14 04:50:41.913705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.795 [2024-07-14 04:50:41.913719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.795 [2024-07-14 04:50:41.913732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.795 [2024-07-14 04:50:41.913775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.795 qpair failed and we were unable to recover it. 
00:34:21.795 [2024-07-14 04:50:41.923542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.796 [2024-07-14 04:50:41.923696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.796 [2024-07-14 04:50:41.923722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.796 [2024-07-14 04:50:41.923736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.796 [2024-07-14 04:50:41.923749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.796 [2024-07-14 04:50:41.923792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.796 qpair failed and we were unable to recover it. 00:34:21.796 [2024-07-14 04:50:41.933553] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.796 [2024-07-14 04:50:41.933750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.796 [2024-07-14 04:50:41.933777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.796 [2024-07-14 04:50:41.933791] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.796 [2024-07-14 04:50:41.933804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.796 [2024-07-14 04:50:41.933835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.796 qpair failed and we were unable to recover it. 00:34:21.796 [2024-07-14 04:50:41.943589] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.796 [2024-07-14 04:50:41.943742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.796 [2024-07-14 04:50:41.943768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.796 [2024-07-14 04:50:41.943782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.796 [2024-07-14 04:50:41.943795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.796 [2024-07-14 04:50:41.943825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.796 qpair failed and we were unable to recover it. 
00:34:21.796 [2024-07-14 04:50:41.953645] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.796 [2024-07-14 04:50:41.953799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.796 [2024-07-14 04:50:41.953825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.796 [2024-07-14 04:50:41.953839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.796 [2024-07-14 04:50:41.953851] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.796 [2024-07-14 04:50:41.953888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.796 qpair failed and we were unable to recover it. 00:34:21.796 [2024-07-14 04:50:41.963648] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.796 [2024-07-14 04:50:41.963800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.796 [2024-07-14 04:50:41.963826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.796 [2024-07-14 04:50:41.963840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.796 [2024-07-14 04:50:41.963854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.796 [2024-07-14 04:50:41.963892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.796 qpair failed and we were unable to recover it. 00:34:21.796 [2024-07-14 04:50:41.973667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.796 [2024-07-14 04:50:41.973823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.796 [2024-07-14 04:50:41.973848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.796 [2024-07-14 04:50:41.973875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.796 [2024-07-14 04:50:41.973891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.796 [2024-07-14 04:50:41.973921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.796 qpair failed and we were unable to recover it. 
00:34:21.796 [2024-07-14 04:50:41.983710] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.796 [2024-07-14 04:50:41.983861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.796 [2024-07-14 04:50:41.983898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.796 [2024-07-14 04:50:41.983914] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.796 [2024-07-14 04:50:41.983926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:21.796 [2024-07-14 04:50:41.983969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.796 qpair failed and we were unable to recover it. 00:34:22.057 [2024-07-14 04:50:41.993743] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.057 [2024-07-14 04:50:41.993967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.057 [2024-07-14 04:50:41.993993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.057 [2024-07-14 04:50:41.994008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.057 [2024-07-14 04:50:41.994021] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.057 [2024-07-14 04:50:41.994051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.057 qpair failed and we were unable to recover it. 00:34:22.057 [2024-07-14 04:50:42.003760] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.057 [2024-07-14 04:50:42.003916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.057 [2024-07-14 04:50:42.003942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.057 [2024-07-14 04:50:42.003957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.057 [2024-07-14 04:50:42.003969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.057 [2024-07-14 04:50:42.004000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.057 qpair failed and we were unable to recover it. 
00:34:22.057 [2024-07-14 04:50:42.013798] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.057 [2024-07-14 04:50:42.013987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.057 [2024-07-14 04:50:42.014013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.057 [2024-07-14 04:50:42.014028] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.057 [2024-07-14 04:50:42.014041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.057 [2024-07-14 04:50:42.014070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.057 qpair failed and we were unable to recover it. 00:34:22.057 [2024-07-14 04:50:42.023835] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.057 [2024-07-14 04:50:42.024028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.057 [2024-07-14 04:50:42.024054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.057 [2024-07-14 04:50:42.024069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.057 [2024-07-14 04:50:42.024082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.057 [2024-07-14 04:50:42.024113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.057 qpair failed and we were unable to recover it. 00:34:22.057 [2024-07-14 04:50:42.033875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.057 [2024-07-14 04:50:42.034046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.057 [2024-07-14 04:50:42.034072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.057 [2024-07-14 04:50:42.034086] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.057 [2024-07-14 04:50:42.034099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.057 [2024-07-14 04:50:42.034129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.057 qpair failed and we were unable to recover it. 
00:34:22.057 [2024-07-14 04:50:42.043899] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.057 [2024-07-14 04:50:42.044087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.057 [2024-07-14 04:50:42.044113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.057 [2024-07-14 04:50:42.044127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.057 [2024-07-14 04:50:42.044140] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.057 [2024-07-14 04:50:42.044171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.057 qpair failed and we were unable to recover it. 00:34:22.057 [2024-07-14 04:50:42.053928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.057 [2024-07-14 04:50:42.054082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.057 [2024-07-14 04:50:42.054108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.057 [2024-07-14 04:50:42.054122] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.057 [2024-07-14 04:50:42.054135] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.057 [2024-07-14 04:50:42.054165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.057 qpair failed and we were unable to recover it. 00:34:22.057 [2024-07-14 04:50:42.063918] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.057 [2024-07-14 04:50:42.064068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.057 [2024-07-14 04:50:42.064093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.057 [2024-07-14 04:50:42.064114] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.057 [2024-07-14 04:50:42.064128] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.057 [2024-07-14 04:50:42.064158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.057 qpair failed and we were unable to recover it. 
00:34:22.057 [2024-07-14 04:50:42.073986] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.057 [2024-07-14 04:50:42.074157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.057 [2024-07-14 04:50:42.074183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.057 [2024-07-14 04:50:42.074197] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.057 [2024-07-14 04:50:42.074210] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.057 [2024-07-14 04:50:42.074240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.057 qpair failed and we were unable to recover it. 00:34:22.057 [2024-07-14 04:50:42.084062] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.057 [2024-07-14 04:50:42.084242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.057 [2024-07-14 04:50:42.084269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.057 [2024-07-14 04:50:42.084283] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.057 [2024-07-14 04:50:42.084297] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.057 [2024-07-14 04:50:42.084326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.057 qpair failed and we were unable to recover it. 00:34:22.057 [2024-07-14 04:50:42.094056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.057 [2024-07-14 04:50:42.094240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.057 [2024-07-14 04:50:42.094266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.057 [2024-07-14 04:50:42.094280] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.057 [2024-07-14 04:50:42.094293] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.057 [2024-07-14 04:50:42.094324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.057 qpair failed and we were unable to recover it. 
00:34:22.057 [2024-07-14 04:50:42.104040] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.057 [2024-07-14 04:50:42.104193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.057 [2024-07-14 04:50:42.104219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.057 [2024-07-14 04:50:42.104233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.057 [2024-07-14 04:50:42.104247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.057 [2024-07-14 04:50:42.104277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.057 qpair failed and we were unable to recover it. 00:34:22.057 [2024-07-14 04:50:42.114084] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.057 [2024-07-14 04:50:42.114280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.057 [2024-07-14 04:50:42.114306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.057 [2024-07-14 04:50:42.114321] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.057 [2024-07-14 04:50:42.114334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.058 [2024-07-14 04:50:42.114363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.058 qpair failed and we were unable to recover it. 00:34:22.058 [2024-07-14 04:50:42.124104] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.058 [2024-07-14 04:50:42.124254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.058 [2024-07-14 04:50:42.124280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.058 [2024-07-14 04:50:42.124294] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.058 [2024-07-14 04:50:42.124307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.058 [2024-07-14 04:50:42.124350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.058 qpair failed and we were unable to recover it. 
00:34:22.058 [2024-07-14 04:50:42.134182] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.058 [2024-07-14 04:50:42.134340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.058 [2024-07-14 04:50:42.134366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.058 [2024-07-14 04:50:42.134384] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.058 [2024-07-14 04:50:42.134397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.058 [2024-07-14 04:50:42.134427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.058 qpair failed and we were unable to recover it. 00:34:22.058 [2024-07-14 04:50:42.144180] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.058 [2024-07-14 04:50:42.144349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.058 [2024-07-14 04:50:42.144375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.058 [2024-07-14 04:50:42.144389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.058 [2024-07-14 04:50:42.144403] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.058 [2024-07-14 04:50:42.144433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.058 qpair failed and we were unable to recover it. 00:34:22.058 [2024-07-14 04:50:42.154209] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.058 [2024-07-14 04:50:42.154372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.058 [2024-07-14 04:50:42.154403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.058 [2024-07-14 04:50:42.154419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.058 [2024-07-14 04:50:42.154432] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.058 [2024-07-14 04:50:42.154461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.058 qpair failed and we were unable to recover it. 
00:34:22.058 [2024-07-14 04:50:42.164220] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.058 [2024-07-14 04:50:42.164372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.058 [2024-07-14 04:50:42.164399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.058 [2024-07-14 04:50:42.164413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.058 [2024-07-14 04:50:42.164426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.058 [2024-07-14 04:50:42.164456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.058 qpair failed and we were unable to recover it. 00:34:22.058 [2024-07-14 04:50:42.174242] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.058 [2024-07-14 04:50:42.174395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.058 [2024-07-14 04:50:42.174421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.058 [2024-07-14 04:50:42.174436] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.058 [2024-07-14 04:50:42.174449] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.058 [2024-07-14 04:50:42.174480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.058 qpair failed and we were unable to recover it. 00:34:22.058 [2024-07-14 04:50:42.184338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.058 [2024-07-14 04:50:42.184487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.058 [2024-07-14 04:50:42.184513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.058 [2024-07-14 04:50:42.184528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.058 [2024-07-14 04:50:42.184541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.058 [2024-07-14 04:50:42.184571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.058 qpair failed and we were unable to recover it. 
00:34:22.058 [2024-07-14 04:50:42.194311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.058 [2024-07-14 04:50:42.194473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.058 [2024-07-14 04:50:42.194498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.058 [2024-07-14 04:50:42.194513] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.058 [2024-07-14 04:50:42.194526] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.058 [2024-07-14 04:50:42.194574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.058 qpair failed and we were unable to recover it. 00:34:22.058 [2024-07-14 04:50:42.204370] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.058 [2024-07-14 04:50:42.204522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.058 [2024-07-14 04:50:42.204548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.058 [2024-07-14 04:50:42.204563] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.058 [2024-07-14 04:50:42.204576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.058 [2024-07-14 04:50:42.204605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.058 qpair failed and we were unable to recover it. 00:34:22.058 [2024-07-14 04:50:42.214345] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.058 [2024-07-14 04:50:42.214501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.058 [2024-07-14 04:50:42.214527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.058 [2024-07-14 04:50:42.214541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.058 [2024-07-14 04:50:42.214554] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.058 [2024-07-14 04:50:42.214585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.058 qpair failed and we were unable to recover it. 
00:34:22.058 [2024-07-14 04:50:42.224400] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.058 [2024-07-14 04:50:42.224553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.058 [2024-07-14 04:50:42.224578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.058 [2024-07-14 04:50:42.224593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.058 [2024-07-14 04:50:42.224606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.058 [2024-07-14 04:50:42.224636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.058 qpair failed and we were unable to recover it. 00:34:22.058 [2024-07-14 04:50:42.234407] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.058 [2024-07-14 04:50:42.234560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.058 [2024-07-14 04:50:42.234585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.058 [2024-07-14 04:50:42.234599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.058 [2024-07-14 04:50:42.234611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.058 [2024-07-14 04:50:42.234643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.058 qpair failed and we were unable to recover it. 00:34:22.058 [2024-07-14 04:50:42.244435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.058 [2024-07-14 04:50:42.244586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.058 [2024-07-14 04:50:42.244618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.058 [2024-07-14 04:50:42.244632] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.058 [2024-07-14 04:50:42.244645] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.058 [2024-07-14 04:50:42.244675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.058 qpair failed and we were unable to recover it. 
00:34:22.320 [2024-07-14 04:50:42.254492] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.320 [2024-07-14 04:50:42.254656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.320 [2024-07-14 04:50:42.254682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.320 [2024-07-14 04:50:42.254697] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.320 [2024-07-14 04:50:42.254711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.320 [2024-07-14 04:50:42.254740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.320 qpair failed and we were unable to recover it. 00:34:22.320 [2024-07-14 04:50:42.264483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.320 [2024-07-14 04:50:42.264652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.320 [2024-07-14 04:50:42.264679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.320 [2024-07-14 04:50:42.264693] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.320 [2024-07-14 04:50:42.264706] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.320 [2024-07-14 04:50:42.264738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.320 qpair failed and we were unable to recover it. 00:34:22.320 [2024-07-14 04:50:42.274581] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.320 [2024-07-14 04:50:42.274781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.320 [2024-07-14 04:50:42.274807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.320 [2024-07-14 04:50:42.274822] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.320 [2024-07-14 04:50:42.274835] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.320 [2024-07-14 04:50:42.274872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.320 qpair failed and we were unable to recover it. 
00:34:22.320 [2024-07-14 04:50:42.284569] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.320 [2024-07-14 04:50:42.284736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.320 [2024-07-14 04:50:42.284763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.320 [2024-07-14 04:50:42.284778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.320 [2024-07-14 04:50:42.284797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.320 [2024-07-14 04:50:42.284828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.320 qpair failed and we were unable to recover it. 00:34:22.320 [2024-07-14 04:50:42.294608] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.321 [2024-07-14 04:50:42.294767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.321 [2024-07-14 04:50:42.294794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.321 [2024-07-14 04:50:42.294808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.321 [2024-07-14 04:50:42.294822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.321 [2024-07-14 04:50:42.294851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.321 qpair failed and we were unable to recover it. 00:34:22.321 [2024-07-14 04:50:42.304604] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.321 [2024-07-14 04:50:42.304753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.321 [2024-07-14 04:50:42.304779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.321 [2024-07-14 04:50:42.304794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.321 [2024-07-14 04:50:42.304807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.321 [2024-07-14 04:50:42.304837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.321 qpair failed and we were unable to recover it. 
00:34:22.321 [2024-07-14 04:50:42.314696] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.321 [2024-07-14 04:50:42.314855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.321 [2024-07-14 04:50:42.314889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.321 [2024-07-14 04:50:42.314904] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.321 [2024-07-14 04:50:42.314917] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.321 [2024-07-14 04:50:42.314947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.321 qpair failed and we were unable to recover it. 00:34:22.321 [2024-07-14 04:50:42.324715] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.321 [2024-07-14 04:50:42.324893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.321 [2024-07-14 04:50:42.324918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.321 [2024-07-14 04:50:42.324933] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.321 [2024-07-14 04:50:42.324946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.321 [2024-07-14 04:50:42.324975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.321 qpair failed and we were unable to recover it. 00:34:22.321 [2024-07-14 04:50:42.334686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.321 [2024-07-14 04:50:42.334841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.321 [2024-07-14 04:50:42.334873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.321 [2024-07-14 04:50:42.334891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.321 [2024-07-14 04:50:42.334905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.321 [2024-07-14 04:50:42.334936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.321 qpair failed and we were unable to recover it. 
00:34:22.321 [2024-07-14 04:50:42.344752] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.321 [2024-07-14 04:50:42.344912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.321 [2024-07-14 04:50:42.344938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.321 [2024-07-14 04:50:42.344953] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.321 [2024-07-14 04:50:42.344966] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.321 [2024-07-14 04:50:42.344996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.321 qpair failed and we were unable to recover it. 00:34:22.321 [2024-07-14 04:50:42.354768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.321 [2024-07-14 04:50:42.354938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.321 [2024-07-14 04:50:42.354963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.321 [2024-07-14 04:50:42.354978] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.321 [2024-07-14 04:50:42.354990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.321 [2024-07-14 04:50:42.355022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.321 qpair failed and we were unable to recover it. 00:34:22.321 [2024-07-14 04:50:42.364771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.321 [2024-07-14 04:50:42.364924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.321 [2024-07-14 04:50:42.364950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.321 [2024-07-14 04:50:42.364964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.321 [2024-07-14 04:50:42.364977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.321 [2024-07-14 04:50:42.365007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.321 qpair failed and we were unable to recover it. 
00:34:22.321 [2024-07-14 04:50:42.374832] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.321 [2024-07-14 04:50:42.375013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.321 [2024-07-14 04:50:42.375040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.321 [2024-07-14 04:50:42.375060] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.321 [2024-07-14 04:50:42.375074] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.321 [2024-07-14 04:50:42.375106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.321 qpair failed and we were unable to recover it. 00:34:22.321 [2024-07-14 04:50:42.384875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.321 [2024-07-14 04:50:42.385039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.321 [2024-07-14 04:50:42.385065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.321 [2024-07-14 04:50:42.385080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.321 [2024-07-14 04:50:42.385093] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.321 [2024-07-14 04:50:42.385124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.321 qpair failed and we were unable to recover it. 00:34:22.321 [2024-07-14 04:50:42.394878] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.321 [2024-07-14 04:50:42.395048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.321 [2024-07-14 04:50:42.395074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.321 [2024-07-14 04:50:42.395089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.321 [2024-07-14 04:50:42.395102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.321 [2024-07-14 04:50:42.395133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.321 qpair failed and we were unable to recover it. 
00:34:22.321 [2024-07-14 04:50:42.404982] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.321 [2024-07-14 04:50:42.405179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.321 [2024-07-14 04:50:42.405206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.321 [2024-07-14 04:50:42.405226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.321 [2024-07-14 04:50:42.405240] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.321 [2024-07-14 04:50:42.405271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.321 qpair failed and we were unable to recover it. 00:34:22.321 [2024-07-14 04:50:42.414935] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.321 [2024-07-14 04:50:42.415097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.321 [2024-07-14 04:50:42.415124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.321 [2024-07-14 04:50:42.415138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.321 [2024-07-14 04:50:42.415151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.321 [2024-07-14 04:50:42.415194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.321 qpair failed and we were unable to recover it. 00:34:22.321 [2024-07-14 04:50:42.424972] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.321 [2024-07-14 04:50:42.425169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.321 [2024-07-14 04:50:42.425195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.321 [2024-07-14 04:50:42.425210] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.321 [2024-07-14 04:50:42.425223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.321 [2024-07-14 04:50:42.425253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.321 qpair failed and we were unable to recover it. 
00:34:22.321 [2024-07-14 04:50:42.435027] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.321 [2024-07-14 04:50:42.435229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.321 [2024-07-14 04:50:42.435255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.321 [2024-07-14 04:50:42.435270] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.322 [2024-07-14 04:50:42.435283] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.322 [2024-07-14 04:50:42.435313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.322 qpair failed and we were unable to recover it. 00:34:22.322 [2024-07-14 04:50:42.445039] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.322 [2024-07-14 04:50:42.445195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.322 [2024-07-14 04:50:42.445222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.322 [2024-07-14 04:50:42.445236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.322 [2024-07-14 04:50:42.445249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.322 [2024-07-14 04:50:42.445279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.322 qpair failed and we were unable to recover it. 00:34:22.322 [2024-07-14 04:50:42.455047] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.322 [2024-07-14 04:50:42.455212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.322 [2024-07-14 04:50:42.455238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.322 [2024-07-14 04:50:42.455253] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.322 [2024-07-14 04:50:42.455266] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.322 [2024-07-14 04:50:42.455296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.322 qpair failed and we were unable to recover it. 
00:34:22.322 [2024-07-14 04:50:42.465043] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.322 [2024-07-14 04:50:42.465211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.322 [2024-07-14 04:50:42.465237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.322 [2024-07-14 04:50:42.465259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.322 [2024-07-14 04:50:42.465273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.322 [2024-07-14 04:50:42.465303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.322 qpair failed and we were unable to recover it. 00:34:22.322 [2024-07-14 04:50:42.475137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.322 [2024-07-14 04:50:42.475296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.322 [2024-07-14 04:50:42.475322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.322 [2024-07-14 04:50:42.475336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.322 [2024-07-14 04:50:42.475350] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.322 [2024-07-14 04:50:42.475379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.322 qpair failed and we were unable to recover it. 00:34:22.322 [2024-07-14 04:50:42.485180] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.322 [2024-07-14 04:50:42.485342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.322 [2024-07-14 04:50:42.485369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.322 [2024-07-14 04:50:42.485385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.322 [2024-07-14 04:50:42.485401] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.322 [2024-07-14 04:50:42.485432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.322 qpair failed and we were unable to recover it. 
00:34:22.322 [2024-07-14 04:50:42.495182] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.322 [2024-07-14 04:50:42.495375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.322 [2024-07-14 04:50:42.495403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.322 [2024-07-14 04:50:42.495422] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.322 [2024-07-14 04:50:42.495436] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.322 [2024-07-14 04:50:42.495466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.322 qpair failed and we were unable to recover it. 00:34:22.322 [2024-07-14 04:50:42.505192] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.322 [2024-07-14 04:50:42.505339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.322 [2024-07-14 04:50:42.505366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.322 [2024-07-14 04:50:42.505381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.322 [2024-07-14 04:50:42.505394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.322 [2024-07-14 04:50:42.505425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.322 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-14 04:50:42.515209] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.582 [2024-07-14 04:50:42.515374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.582 [2024-07-14 04:50:42.515401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.582 [2024-07-14 04:50:42.515415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.582 [2024-07-14 04:50:42.515428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.582 [2024-07-14 04:50:42.515458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.582 qpair failed and we were unable to recover it. 
00:34:22.582 [2024-07-14 04:50:42.525228] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.582 [2024-07-14 04:50:42.525404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.582 [2024-07-14 04:50:42.525430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.582 [2024-07-14 04:50:42.525444] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.582 [2024-07-14 04:50:42.525457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.582 [2024-07-14 04:50:42.525487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-14 04:50:42.535262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.582 [2024-07-14 04:50:42.535412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.582 [2024-07-14 04:50:42.535438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.582 [2024-07-14 04:50:42.535452] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.582 [2024-07-14 04:50:42.535465] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.582 [2024-07-14 04:50:42.535495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-14 04:50:42.545274] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.582 [2024-07-14 04:50:42.545430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.582 [2024-07-14 04:50:42.545456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.582 [2024-07-14 04:50:42.545471] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.582 [2024-07-14 04:50:42.545484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.582 [2024-07-14 04:50:42.545515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.582 qpair failed and we were unable to recover it. 
00:34:22.582 [2024-07-14 04:50:42.555410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.582 [2024-07-14 04:50:42.555569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.582 [2024-07-14 04:50:42.555600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.583 [2024-07-14 04:50:42.555616] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.583 [2024-07-14 04:50:42.555628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.583 [2024-07-14 04:50:42.555658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-14 04:50:42.565326] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.583 [2024-07-14 04:50:42.565471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.583 [2024-07-14 04:50:42.565497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.583 [2024-07-14 04:50:42.565512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.583 [2024-07-14 04:50:42.565525] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.583 [2024-07-14 04:50:42.565554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-14 04:50:42.575361] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.583 [2024-07-14 04:50:42.575519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.583 [2024-07-14 04:50:42.575546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.583 [2024-07-14 04:50:42.575560] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.583 [2024-07-14 04:50:42.575572] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.583 [2024-07-14 04:50:42.575603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.583 qpair failed and we were unable to recover it. 
00:34:22.583 [2024-07-14 04:50:42.585423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.583 [2024-07-14 04:50:42.585574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.583 [2024-07-14 04:50:42.585601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.583 [2024-07-14 04:50:42.585615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.583 [2024-07-14 04:50:42.585628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.583 [2024-07-14 04:50:42.585658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-14 04:50:42.595430] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.583 [2024-07-14 04:50:42.595596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.583 [2024-07-14 04:50:42.595621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.583 [2024-07-14 04:50:42.595636] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.583 [2024-07-14 04:50:42.595649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.583 [2024-07-14 04:50:42.595685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-14 04:50:42.605553] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.583 [2024-07-14 04:50:42.605705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.583 [2024-07-14 04:50:42.605731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.583 [2024-07-14 04:50:42.605746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.583 [2024-07-14 04:50:42.605759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.583 [2024-07-14 04:50:42.605790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.583 qpair failed and we were unable to recover it. 
00:34:22.583 [2024-07-14 04:50:42.615505] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.583 [2024-07-14 04:50:42.615656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.583 [2024-07-14 04:50:42.615682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.583 [2024-07-14 04:50:42.615696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.583 [2024-07-14 04:50:42.615709] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.583 [2024-07-14 04:50:42.615738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-14 04:50:42.625508] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.583 [2024-07-14 04:50:42.625649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.583 [2024-07-14 04:50:42.625675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.583 [2024-07-14 04:50:42.625690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.583 [2024-07-14 04:50:42.625703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.583 [2024-07-14 04:50:42.625733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-14 04:50:42.635588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.583 [2024-07-14 04:50:42.635751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.583 [2024-07-14 04:50:42.635776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.583 [2024-07-14 04:50:42.635790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.583 [2024-07-14 04:50:42.635803] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.583 [2024-07-14 04:50:42.635844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.583 qpair failed and we were unable to recover it. 
00:34:22.583 [2024-07-14 04:50:42.645713] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.583 [2024-07-14 04:50:42.645901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.583 [2024-07-14 04:50:42.645932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.583 [2024-07-14 04:50:42.645948] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.583 [2024-07-14 04:50:42.645960] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.583 [2024-07-14 04:50:42.645990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-14 04:50:42.655668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.583 [2024-07-14 04:50:42.655835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.583 [2024-07-14 04:50:42.655861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.583 [2024-07-14 04:50:42.655883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.583 [2024-07-14 04:50:42.655896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.583 [2024-07-14 04:50:42.655928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-14 04:50:42.665624] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.583 [2024-07-14 04:50:42.665778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.583 [2024-07-14 04:50:42.665804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.583 [2024-07-14 04:50:42.665818] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.583 [2024-07-14 04:50:42.665831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.583 [2024-07-14 04:50:42.665860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.583 qpair failed and we were unable to recover it. 
00:34:22.583 [2024-07-14 04:50:42.675685] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.583 [2024-07-14 04:50:42.675841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.583 [2024-07-14 04:50:42.675875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.584 [2024-07-14 04:50:42.675893] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.584 [2024-07-14 04:50:42.675907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.584 [2024-07-14 04:50:42.675937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-14 04:50:42.685714] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.584 [2024-07-14 04:50:42.685905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.584 [2024-07-14 04:50:42.685932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.584 [2024-07-14 04:50:42.685947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.584 [2024-07-14 04:50:42.685965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.584 [2024-07-14 04:50:42.686008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-14 04:50:42.695733] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.584 [2024-07-14 04:50:42.695964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.584 [2024-07-14 04:50:42.695993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.584 [2024-07-14 04:50:42.696008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.584 [2024-07-14 04:50:42.696021] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.584 [2024-07-14 04:50:42.696052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.584 qpair failed and we were unable to recover it. 
00:34:22.584 [2024-07-14 04:50:42.705776] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.584 [2024-07-14 04:50:42.705933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.584 [2024-07-14 04:50:42.705960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.584 [2024-07-14 04:50:42.705974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.584 [2024-07-14 04:50:42.705987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.584 [2024-07-14 04:50:42.706017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-14 04:50:42.715786] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.584 [2024-07-14 04:50:42.715955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.584 [2024-07-14 04:50:42.715980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.584 [2024-07-14 04:50:42.715994] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.584 [2024-07-14 04:50:42.716007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.584 [2024-07-14 04:50:42.716039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-14 04:50:42.725822] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.584 [2024-07-14 04:50:42.725986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.584 [2024-07-14 04:50:42.726012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.584 [2024-07-14 04:50:42.726027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.584 [2024-07-14 04:50:42.726040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.584 [2024-07-14 04:50:42.726069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.584 qpair failed and we were unable to recover it. 
00:34:22.584 [2024-07-14 04:50:42.735828] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.584 [2024-07-14 04:50:42.735991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.584 [2024-07-14 04:50:42.736017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.584 [2024-07-14 04:50:42.736031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.584 [2024-07-14 04:50:42.736044] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.584 [2024-07-14 04:50:42.736074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-14 04:50:42.745860] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.584 [2024-07-14 04:50:42.746027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.584 [2024-07-14 04:50:42.746054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.584 [2024-07-14 04:50:42.746069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.584 [2024-07-14 04:50:42.746083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.584 [2024-07-14 04:50:42.746113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-14 04:50:42.755932] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.584 [2024-07-14 04:50:42.756104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.584 [2024-07-14 04:50:42.756130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.584 [2024-07-14 04:50:42.756144] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.584 [2024-07-14 04:50:42.756157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.584 [2024-07-14 04:50:42.756187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.584 qpair failed and we were unable to recover it. 
00:34:22.584 [2024-07-14 04:50:42.765978] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.584 [2024-07-14 04:50:42.766136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.584 [2024-07-14 04:50:42.766162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.584 [2024-07-14 04:50:42.766177] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.584 [2024-07-14 04:50:42.766190] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.584 [2024-07-14 04:50:42.766219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.844 [2024-07-14 04:50:42.775957] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.844 [2024-07-14 04:50:42.776109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.844 [2024-07-14 04:50:42.776136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.844 [2024-07-14 04:50:42.776155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.844 [2024-07-14 04:50:42.776174] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.844 [2024-07-14 04:50:42.776206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.844 qpair failed and we were unable to recover it. 00:34:22.844 [2024-07-14 04:50:42.785970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.844 [2024-07-14 04:50:42.786120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.844 [2024-07-14 04:50:42.786147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.844 [2024-07-14 04:50:42.786162] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.844 [2024-07-14 04:50:42.786175] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.844 [2024-07-14 04:50:42.786219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.844 qpair failed and we were unable to recover it. 
00:34:22.844 [2024-07-14 04:50:42.796036] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.844 [2024-07-14 04:50:42.796190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.844 [2024-07-14 04:50:42.796217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.844 [2024-07-14 04:50:42.796231] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.844 [2024-07-14 04:50:42.796244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.844 [2024-07-14 04:50:42.796274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.844 qpair failed and we were unable to recover it. 00:34:22.844 [2024-07-14 04:50:42.806086] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.844 [2024-07-14 04:50:42.806244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.844 [2024-07-14 04:50:42.806270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.844 [2024-07-14 04:50:42.806285] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.844 [2024-07-14 04:50:42.806299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.844 [2024-07-14 04:50:42.806328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.844 qpair failed and we were unable to recover it. 00:34:22.844 [2024-07-14 04:50:42.816073] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.844 [2024-07-14 04:50:42.816226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.844 [2024-07-14 04:50:42.816252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.844 [2024-07-14 04:50:42.816266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.844 [2024-07-14 04:50:42.816279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.844 [2024-07-14 04:50:42.816309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.844 qpair failed and we were unable to recover it. 
00:34:22.844 [2024-07-14 04:50:42.826142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.844 [2024-07-14 04:50:42.826294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.844 [2024-07-14 04:50:42.826320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.844 [2024-07-14 04:50:42.826335] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.844 [2024-07-14 04:50:42.826348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.844 [2024-07-14 04:50:42.826377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.844 qpair failed and we were unable to recover it. 00:34:22.844 [2024-07-14 04:50:42.836120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.844 [2024-07-14 04:50:42.836286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.844 [2024-07-14 04:50:42.836312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.844 [2024-07-14 04:50:42.836327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.844 [2024-07-14 04:50:42.836340] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.844 [2024-07-14 04:50:42.836369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.844 qpair failed and we were unable to recover it. 00:34:22.844 [2024-07-14 04:50:42.846163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.844 [2024-07-14 04:50:42.846325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.844 [2024-07-14 04:50:42.846351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.844 [2024-07-14 04:50:42.846365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.844 [2024-07-14 04:50:42.846379] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.844 [2024-07-14 04:50:42.846409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.844 qpair failed and we were unable to recover it. 
00:34:22.844 [2024-07-14 04:50:42.856176] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.844 [2024-07-14 04:50:42.856336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.844 [2024-07-14 04:50:42.856363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.845 [2024-07-14 04:50:42.856377] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.845 [2024-07-14 04:50:42.856390] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.845 [2024-07-14 04:50:42.856422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.845 qpair failed and we were unable to recover it. 00:34:22.845 [2024-07-14 04:50:42.866231] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.845 [2024-07-14 04:50:42.866382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.845 [2024-07-14 04:50:42.866408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.845 [2024-07-14 04:50:42.866429] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.845 [2024-07-14 04:50:42.866444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.845 [2024-07-14 04:50:42.866474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.845 qpair failed and we were unable to recover it. 00:34:22.845 [2024-07-14 04:50:42.876269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.845 [2024-07-14 04:50:42.876430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.845 [2024-07-14 04:50:42.876455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.845 [2024-07-14 04:50:42.876470] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.845 [2024-07-14 04:50:42.876483] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.845 [2024-07-14 04:50:42.876514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.845 qpair failed and we were unable to recover it. 
00:34:22.845 [2024-07-14 04:50:42.886256] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.845 [2024-07-14 04:50:42.886411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.845 [2024-07-14 04:50:42.886438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.845 [2024-07-14 04:50:42.886452] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.845 [2024-07-14 04:50:42.886465] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.845 [2024-07-14 04:50:42.886494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.845 qpair failed and we were unable to recover it. 00:34:22.845 [2024-07-14 04:50:42.896281] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.845 [2024-07-14 04:50:42.896435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.845 [2024-07-14 04:50:42.896460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.845 [2024-07-14 04:50:42.896475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.845 [2024-07-14 04:50:42.896488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.845 [2024-07-14 04:50:42.896519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.845 qpair failed and we were unable to recover it. 00:34:22.845 [2024-07-14 04:50:42.906321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.845 [2024-07-14 04:50:42.906469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.845 [2024-07-14 04:50:42.906494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.845 [2024-07-14 04:50:42.906508] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.845 [2024-07-14 04:50:42.906521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.845 [2024-07-14 04:50:42.906550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.845 qpair failed and we were unable to recover it. 
00:34:22.845 [2024-07-14 04:50:42.916382] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.845 [2024-07-14 04:50:42.916538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.845 [2024-07-14 04:50:42.916563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.845 [2024-07-14 04:50:42.916577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.845 [2024-07-14 04:50:42.916590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.845 [2024-07-14 04:50:42.916619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.845 qpair failed and we were unable to recover it. 00:34:22.845 [2024-07-14 04:50:42.926391] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.845 [2024-07-14 04:50:42.926546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.845 [2024-07-14 04:50:42.926571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.845 [2024-07-14 04:50:42.926585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.845 [2024-07-14 04:50:42.926598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.845 [2024-07-14 04:50:42.926630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.845 qpair failed and we were unable to recover it. 00:34:22.845 [2024-07-14 04:50:42.936417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.845 [2024-07-14 04:50:42.936578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.845 [2024-07-14 04:50:42.936604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.845 [2024-07-14 04:50:42.936619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.845 [2024-07-14 04:50:42.936632] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.845 [2024-07-14 04:50:42.936663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.845 qpair failed and we were unable to recover it. 
00:34:22.845 [2024-07-14 04:50:42.946425] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.845 [2024-07-14 04:50:42.946596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.845 [2024-07-14 04:50:42.946622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.845 [2024-07-14 04:50:42.946636] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.845 [2024-07-14 04:50:42.946649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.845 [2024-07-14 04:50:42.946692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.845 qpair failed and we were unable to recover it. 00:34:22.845 [2024-07-14 04:50:42.956477] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.845 [2024-07-14 04:50:42.956631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.845 [2024-07-14 04:50:42.956662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.845 [2024-07-14 04:50:42.956677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.845 [2024-07-14 04:50:42.956689] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.845 [2024-07-14 04:50:42.956719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.845 qpair failed and we were unable to recover it. 00:34:22.845 [2024-07-14 04:50:42.966498] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.845 [2024-07-14 04:50:42.966656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.845 [2024-07-14 04:50:42.966682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.845 [2024-07-14 04:50:42.966696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.845 [2024-07-14 04:50:42.966709] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.845 [2024-07-14 04:50:42.966741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.845 qpair failed and we were unable to recover it. 
00:34:22.845 [2024-07-14 04:50:42.976595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.846 [2024-07-14 04:50:42.976745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.846 [2024-07-14 04:50:42.976771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.846 [2024-07-14 04:50:42.976786] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.846 [2024-07-14 04:50:42.976799] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.846 [2024-07-14 04:50:42.976829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-14 04:50:42.986560] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.846 [2024-07-14 04:50:42.986712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.846 [2024-07-14 04:50:42.986739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.846 [2024-07-14 04:50:42.986754] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.846 [2024-07-14 04:50:42.986767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.846 [2024-07-14 04:50:42.986797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-14 04:50:42.996644] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.846 [2024-07-14 04:50:42.996875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.846 [2024-07-14 04:50:42.996905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.846 [2024-07-14 04:50:42.996922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.846 [2024-07-14 04:50:42.996935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.846 [2024-07-14 04:50:42.996972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.846 qpair failed and we were unable to recover it. 
00:34:22.846 [2024-07-14 04:50:43.006594] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.846 [2024-07-14 04:50:43.006747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.846 [2024-07-14 04:50:43.006773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.846 [2024-07-14 04:50:43.006790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.846 [2024-07-14 04:50:43.006803] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.846 [2024-07-14 04:50:43.006834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-14 04:50:43.016709] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.846 [2024-07-14 04:50:43.016860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.846 [2024-07-14 04:50:43.016894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.846 [2024-07-14 04:50:43.016909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.846 [2024-07-14 04:50:43.016922] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.846 [2024-07-14 04:50:43.016952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-14 04:50:43.026657] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.846 [2024-07-14 04:50:43.026810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.846 [2024-07-14 04:50:43.026837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.846 [2024-07-14 04:50:43.026851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.846 [2024-07-14 04:50:43.026871] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:22.846 [2024-07-14 04:50:43.026917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.846 qpair failed and we were unable to recover it. 
00:34:23.105 [2024-07-14 04:50:43.036715] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.105 [2024-07-14 04:50:43.036903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.105 [2024-07-14 04:50:43.036940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.105 [2024-07-14 04:50:43.036954] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.105 [2024-07-14 04:50:43.036967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:23.105 [2024-07-14 04:50:43.036999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.105 qpair failed and we were unable to recover it. 00:34:23.105 [2024-07-14 04:50:43.046716] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.105 [2024-07-14 04:50:43.046924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.105 [2024-07-14 04:50:43.046955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.105 [2024-07-14 04:50:43.046974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.105 [2024-07-14 04:50:43.046988] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:23.105 [2024-07-14 04:50:43.047033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.105 qpair failed and we were unable to recover it. 00:34:23.105 [2024-07-14 04:50:43.056823] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.105 [2024-07-14 04:50:43.056990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.105 [2024-07-14 04:50:43.057016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.105 [2024-07-14 04:50:43.057030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.105 [2024-07-14 04:50:43.057043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:23.105 [2024-07-14 04:50:43.057074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.105 qpair failed and we were unable to recover it. 
00:34:23.105 [2024-07-14 04:50:43.066769] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.105 [2024-07-14 04:50:43.066924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.105 [2024-07-14 04:50:43.066950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.105 [2024-07-14 04:50:43.066965] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.105 [2024-07-14 04:50:43.066979] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:23.105 [2024-07-14 04:50:43.067008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.105 qpair failed and we were unable to recover it. 00:34:23.105 [2024-07-14 04:50:43.076851] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.105 [2024-07-14 04:50:43.077014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.105 [2024-07-14 04:50:43.077040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.105 [2024-07-14 04:50:43.077054] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.105 [2024-07-14 04:50:43.077067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:23.105 [2024-07-14 04:50:43.077097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.105 qpair failed and we were unable to recover it. 00:34:23.105 [2024-07-14 04:50:43.086847] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.105 [2024-07-14 04:50:43.087051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.105 [2024-07-14 04:50:43.087077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.105 [2024-07-14 04:50:43.087092] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.105 [2024-07-14 04:50:43.087105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:23.105 [2024-07-14 04:50:43.087141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.105 qpair failed and we were unable to recover it. 
00:34:23.105 [2024-07-14 04:50:43.096957] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.105 [2024-07-14 04:50:43.097144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.105 [2024-07-14 04:50:43.097171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.105 [2024-07-14 04:50:43.097191] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.105 [2024-07-14 04:50:43.097205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:23.105 [2024-07-14 04:50:43.097236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.105 qpair failed and we were unable to recover it. 00:34:23.105 [2024-07-14 04:50:43.106909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.105 [2024-07-14 04:50:43.107062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.105 [2024-07-14 04:50:43.107089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.105 [2024-07-14 04:50:43.107104] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.106 [2024-07-14 04:50:43.107117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:23.106 [2024-07-14 04:50:43.107147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.106 qpair failed and we were unable to recover it. 00:34:23.106 [2024-07-14 04:50:43.116954] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.106 [2024-07-14 04:50:43.117148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.106 [2024-07-14 04:50:43.117174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.106 [2024-07-14 04:50:43.117188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.106 [2024-07-14 04:50:43.117201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:23.106 [2024-07-14 04:50:43.117232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.106 qpair failed and we were unable to recover it. 
00:34:23.106 [2024-07-14 04:50:43.126967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.106 [2024-07-14 04:50:43.127114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.106 [2024-07-14 04:50:43.127140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.106 [2024-07-14 04:50:43.127154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.106 [2024-07-14 04:50:43.127167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:23.106 [2024-07-14 04:50:43.127196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.106 qpair failed and we were unable to recover it. 00:34:23.106 [2024-07-14 04:50:43.137027] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.106 [2024-07-14 04:50:43.137191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.106 [2024-07-14 04:50:43.137219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.106 [2024-07-14 04:50:43.137234] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.106 [2024-07-14 04:50:43.137248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:23.106 [2024-07-14 04:50:43.137278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.106 qpair failed and we were unable to recover it. 00:34:23.106 [2024-07-14 04:50:43.147049] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.106 [2024-07-14 04:50:43.147200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.106 [2024-07-14 04:50:43.147226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.106 [2024-07-14 04:50:43.147241] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.106 [2024-07-14 04:50:43.147254] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:23.106 [2024-07-14 04:50:43.147284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.106 qpair failed and we were unable to recover it. 
00:34:23.106 [2024-07-14 04:50:43.157084] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.106 [2024-07-14 04:50:43.157254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.106 [2024-07-14 04:50:43.157280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.106 [2024-07-14 04:50:43.157294] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.106 [2024-07-14 04:50:43.157307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:23.106 [2024-07-14 04:50:43.157337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.106 qpair failed and we were unable to recover it. 00:34:23.106 [2024-07-14 04:50:43.167133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.106 [2024-07-14 04:50:43.167292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.106 [2024-07-14 04:50:43.167318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.106 [2024-07-14 04:50:43.167332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.106 [2024-07-14 04:50:43.167345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:23.106 [2024-07-14 04:50:43.167374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.106 qpair failed and we were unable to recover it. 00:34:23.106 [2024-07-14 04:50:43.177133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.106 [2024-07-14 04:50:43.177282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.106 [2024-07-14 04:50:43.177308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.106 [2024-07-14 04:50:43.177322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.106 [2024-07-14 04:50:43.177341] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:23.106 [2024-07-14 04:50:43.177371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.106 qpair failed and we were unable to recover it. 
00:34:23.106 [2024-07-14 04:50:43.187158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.106 [2024-07-14 04:50:43.187337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.106 [2024-07-14 04:50:43.187364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.106 [2024-07-14 04:50:43.187378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.106 [2024-07-14 04:50:43.187391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:23.106 [2024-07-14 04:50:43.187421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.106 qpair failed and we were unable to recover it. 00:34:23.106 [2024-07-14 04:50:43.197225] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.106 [2024-07-14 04:50:43.197417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.106 [2024-07-14 04:50:43.197443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.106 [2024-07-14 04:50:43.197457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.106 [2024-07-14 04:50:43.197470] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:23.106 [2024-07-14 04:50:43.197499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.106 qpair failed and we were unable to recover it. 00:34:23.106 [2024-07-14 04:50:43.207218] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.106 [2024-07-14 04:50:43.207374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.106 [2024-07-14 04:50:43.207400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.106 [2024-07-14 04:50:43.207414] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.106 [2024-07-14 04:50:43.207427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:23.106 [2024-07-14 04:50:43.207457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.106 qpair failed and we were unable to recover it. 
00:34:23.106 [2024-07-14 04:50:43.217251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.106 [2024-07-14 04:50:43.217410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.106 [2024-07-14 04:50:43.217436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.106 [2024-07-14 04:50:43.217450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.106 [2024-07-14 04:50:43.217463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4498000b90 00:34:23.106 [2024-07-14 04:50:43.217493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.106 qpair failed and we were unable to recover it. 00:34:23.106 [2024-07-14 04:50:43.227284] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.106 [2024-07-14 04:50:43.227437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.107 [2024-07-14 04:50:43.227472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.107 [2024-07-14 04:50:43.227490] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.107 [2024-07-14 04:50:43.227504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:23.107 [2024-07-14 04:50:43.227536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:23.107 qpair failed and we were unable to recover it. 00:34:23.107 [2024-07-14 04:50:43.237358] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.107 [2024-07-14 04:50:43.237533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.107 [2024-07-14 04:50:43.237562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.107 [2024-07-14 04:50:43.237577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.107 [2024-07-14 04:50:43.237590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f44a8000b90 00:34:23.107 [2024-07-14 04:50:43.237619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:23.107 qpair failed and we were unable to recover it. 00:34:23.107 [2024-07-14 04:50:43.237756] nvme_ctrlr.c:4353:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:34:23.107 A controller has encountered a failure and is being reset. 00:34:23.364 Controller properly reset. 
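The run above is the failure pattern driven by the nvmf_target_disconnect_tc2 case: the target rejects each new I/O qpair CONNECT ("Unknown controller ID 0x1"), the host sees the command complete with sct 1, sc 130 (0x82) followed by a CQ transport error -6 on the qpair, until the keep-alive submission fails and the controller is reset. When triaging a console log like this, a short shell helper can condense the long run of near-identical records into counts. The sketch below is illustrative only: it greps for the exact messages visible above, and the default log path is a placeholder, not a file the harness produces.

#!/usr/bin/env bash
# Summarize the repeated NVMe-oF CONNECT failure records in an autotest console log.
# The default LOG path is a placeholder; pass the real console log as $1.
LOG=${1:-nvmf-tcp-phy-autotest.log}

count() { grep -o "$1" "$LOG" | wc -l; }

echo "CONNECT commands failed (rc -5): $(count 'Connect command failed, rc -5')"
echo "qpairs lost without recovery:    $(count 'qpair failed and we were unable to recover it')"
echo "keep-alive submission failures:  $(count 'Submitting Keep Alive failed')"
echo "controller resets completed:     $(count 'Controller properly reset')"

# Break the transport errors down by qpair id (ids 1 and 4 appear above).
grep -o 'on qpair id [0-9]*' "$LOG" | sort | uniq -c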
00:34:23.364 Initializing NVMe Controllers 00:34:23.364 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:23.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:23.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:23.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:23.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:23.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:23.364 Initialization complete. Launching workers. 00:34:23.364 Starting thread on core 1 00:34:23.364 Starting thread on core 2 00:34:23.364 Starting thread on core 3 00:34:23.364 Starting thread on core 0 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:23.364 00:34:23.364 real 0m10.848s 00:34:23.364 user 0m16.673s 00:34:23.364 sys 0m5.922s 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:23.364 ************************************ 00:34:23.364 END TEST nvmf_target_disconnect_tc2 00:34:23.364 ************************************ 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:23.364 rmmod nvme_tcp 00:34:23.364 rmmod nvme_fabrics 00:34:23.364 rmmod nvme_keyring 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2949503 ']' 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2949503 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 2949503 ']' 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 2949503 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2949503 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- 
common/autotest_common.sh@952 -- # process_name=reactor_4 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2949503' 00:34:23.364 killing process with pid 2949503 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 2949503 00:34:23.364 04:50:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 2949503 00:34:23.621 04:50:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:23.621 04:50:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:23.621 04:50:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:23.621 04:50:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:23.621 04:50:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:23.621 04:50:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:23.621 04:50:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:23.621 04:50:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.154 04:50:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:26.154 00:34:26.154 real 0m15.619s 00:34:26.154 user 0m43.109s 00:34:26.154 sys 0m7.912s 00:34:26.154 04:50:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:26.154 04:50:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:26.154 ************************************ 00:34:26.154 END TEST nvmf_target_disconnect 00:34:26.154 ************************************ 00:34:26.154 04:50:45 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:26.154 04:50:45 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:26.154 04:50:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:26.154 04:50:45 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:26.154 00:34:26.154 real 27m8.454s 00:34:26.154 user 74m33.414s 00:34:26.154 sys 6m21.593s 00:34:26.154 04:50:45 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:26.154 04:50:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:26.154 ************************************ 00:34:26.154 END TEST nvmf_tcp 00:34:26.154 ************************************ 00:34:26.154 04:50:45 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:26.154 04:50:45 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:26.154 04:50:45 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:26.154 04:50:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:26.154 04:50:45 -- common/autotest_common.sh@10 -- # set +x 00:34:26.154 ************************************ 00:34:26.154 START TEST spdkcli_nvmf_tcp 00:34:26.154 ************************************ 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:26.154 * Looking for test storage... 
00:34:26.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2950694 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2950694 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 2950694 ']' 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:26.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:26.154 04:50:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:26.154 [2024-07-14 04:50:45.929882] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:34:26.155 [2024-07-14 04:50:45.929982] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2950694 ] 00:34:26.155 EAL: No free 2048 kB hugepages reported on node 1 00:34:26.155 [2024-07-14 04:50:45.992670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:26.155 [2024-07-14 04:50:46.083888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:26.155 [2024-07-14 04:50:46.083897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:26.155 04:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:26.155 04:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:34:26.155 04:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:26.155 04:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:26.155 04:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:26.155 04:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:26.155 04:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:26.155 04:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:26.155 04:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:26.155 04:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:26.155 04:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:26.155 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:26.155 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:26.155 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:26.155 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:26.155 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:26.155 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:26.155 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:26.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:26.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:26.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:26.155 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:26.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:26.155 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:26.155 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:26.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:26.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:26.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:26.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:26.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:26.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:26.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:26.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:26.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:26.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:26.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:26.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:26.155 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:26.155 ' 00:34:28.698 [2024-07-14 04:50:48.735282] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:30.074 [2024-07-14 04:50:49.955528] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:32.607 [2024-07-14 04:50:52.242864] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:34.517 [2024-07-14 04:50:54.185081] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:35.888 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:35.888 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:35.888 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:35.888 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:35.888 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:35.888 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:35.888 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:35.888 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:35.888 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:35.888 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:35.888 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:35.888 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:35.888 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:35.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:35.889 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:35.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:35.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:35.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:35.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:35.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:35.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:35.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:35.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:35.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:35.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:35.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:35.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:35.889 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:35.889 04:50:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:35.889 04:50:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:35.889 04:50:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.889 04:50:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:35.889 04:50:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:35.889 04:50:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.889 04:50:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:35.889 04:50:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:36.147 04:50:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:36.147 04:50:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:36.147 04:50:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:36.147 04:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:36.147 04:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:36.147 04:50:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:36.147 04:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:36.147 04:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:36.147 04:50:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:36.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:36.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:36.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:36.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:36.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:36.147 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:36.147 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:36.147 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:36.147 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:36.147 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:36.147 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:36.147 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:36.147 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:36.147 ' 00:34:41.424 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:41.424 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:41.424 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:41.424 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:41.424 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:41.424 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:41.424 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:41.424 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:41.424 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:41.424 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:41.424 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:34:41.424 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:41.424 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:41.424 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:41.424 04:51:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:41.424 04:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:41.424 04:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:41.424 04:51:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2950694 00:34:41.424 04:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 2950694 ']' 00:34:41.424 04:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 2950694 00:34:41.424 04:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:34:41.424 04:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:41.424 04:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2950694 00:34:41.424 04:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:41.425 04:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:41.425 04:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2950694' 00:34:41.425 killing process with pid 2950694 00:34:41.425 04:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 2950694 00:34:41.425 04:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 2950694 00:34:41.683 04:51:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:41.683 04:51:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:41.683 04:51:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2950694 ']' 00:34:41.683 04:51:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2950694 00:34:41.683 04:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 2950694 ']' 00:34:41.683 04:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 2950694 00:34:41.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2950694) - No such process 00:34:41.683 04:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 2950694 is not found' 00:34:41.683 Process with pid 2950694 is not found 00:34:41.683 04:51:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:41.683 04:51:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:41.683 04:51:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:41.683 00:34:41.683 real 0m15.942s 00:34:41.683 user 0m33.686s 00:34:41.683 sys 0m0.792s 00:34:41.683 04:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:41.683 04:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:41.683 ************************************ 00:34:41.683 END TEST spdkcli_nvmf_tcp 00:34:41.683 ************************************ 00:34:41.683 04:51:01 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:41.683 04:51:01 -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:41.683 04:51:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:41.683 04:51:01 -- common/autotest_common.sh@10 -- # set +x 00:34:41.683 ************************************ 00:34:41.683 START TEST nvmf_identify_passthru 00:34:41.683 ************************************ 00:34:41.683 04:51:01 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:41.683 * Looking for test storage... 00:34:41.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:41.683 04:51:01 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:41.683 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:41.683 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:41.683 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:41.683 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:41.683 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:41.683 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:41.683 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:41.683 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:41.683 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:41.683 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:41.683 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:41.683 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:41.683 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:41.683 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:41.683 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:41.683 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:41.683 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:41.683 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:41.683 04:51:01 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:41.683 04:51:01 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:41.683 04:51:01 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:41.683 04:51:01 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.683 04:51:01 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.684 04:51:01 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.684 04:51:01 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:41.684 04:51:01 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.684 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:41.684 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:41.684 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:41.684 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:41.684 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:41.684 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:41.684 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:41.684 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:41.684 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:41.684 04:51:01 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:41.684 04:51:01 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:41.684 04:51:01 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:41.684 04:51:01 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:41.684 04:51:01 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.684 04:51:01 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.684 04:51:01 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.684 04:51:01 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:41.684 04:51:01 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.684 04:51:01 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:41.684 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:41.684 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:41.684 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:41.684 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:41.684 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:41.684 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.684 04:51:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:41.684 04:51:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.684 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:41.684 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:41.684 04:51:01 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:41.684 04:51:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:43.585 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:43.585 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:43.585 04:51:03 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:43.585 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:43.585 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:43.585 04:51:03 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:43.585 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:43.844 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:43.844 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:43.844 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:43.844 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:43.844 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:43.844 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:43.844 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:43.844 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:43.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:43.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:34:43.844 00:34:43.844 --- 10.0.0.2 ping statistics --- 00:34:43.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.844 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:34:43.844 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:43.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:43.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:34:43.844 00:34:43.844 --- 10.0.0.1 ping statistics --- 00:34:43.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.844 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:34:43.844 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:43.844 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:43.844 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:43.844 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:43.844 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:43.844 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:43.844 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:43.844 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:43.844 04:51:03 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:43.844 04:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:43.844 04:51:03 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:43.844 04:51:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:43.844 04:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:43.844 04:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:34:43.844 04:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:34:43.844 04:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:34:43.844 04:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:34:43.844 04:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:43.844 04:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:43.844 04:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:43.844 04:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:43.844 04:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:34:43.844 04:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:34:43.844 04:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:34:43.844 04:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:88:00.0 00:34:43.844 04:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:34:43.844 04:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:34:43.844 04:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:43.844 04:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:43.844 04:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:44.102 EAL: No free 2048 kB hugepages reported on node 1 00:34:48.283 
04:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:34:48.283 04:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:48.283 04:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:48.283 04:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:48.283 EAL: No free 2048 kB hugepages reported on node 1 00:34:52.463 04:51:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:52.463 04:51:12 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:52.463 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:52.463 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:52.463 04:51:12 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:52.463 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:52.463 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:52.463 04:51:12 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2955809 00:34:52.463 04:51:12 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:52.463 04:51:12 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:52.463 04:51:12 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2955809 00:34:52.464 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 2955809 ']' 00:34:52.464 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:52.464 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:52.464 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:52.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:52.464 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:52.464 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:52.464 [2024-07-14 04:51:12.496096] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:34:52.464 [2024-07-14 04:51:12.496193] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:52.464 EAL: No free 2048 kB hugepages reported on node 1 00:34:52.464 [2024-07-14 04:51:12.560249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:52.464 [2024-07-14 04:51:12.645481] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:52.464 [2024-07-14 04:51:12.645536] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:52.464 [2024-07-14 04:51:12.645564] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:52.464 [2024-07-14 04:51:12.645575] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:52.464 [2024-07-14 04:51:12.645584] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:52.464 [2024-07-14 04:51:12.645664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:52.464 [2024-07-14 04:51:12.645731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:52.464 [2024-07-14 04:51:12.645797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:52.464 [2024-07-14 04:51:12.645800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:52.721 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:52.721 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:34:52.721 04:51:12 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:52.721 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.721 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:52.721 INFO: Log level set to 20 00:34:52.721 INFO: Requests: 00:34:52.721 { 00:34:52.721 "jsonrpc": "2.0", 00:34:52.721 "method": "nvmf_set_config", 00:34:52.721 "id": 1, 00:34:52.721 "params": { 00:34:52.721 "admin_cmd_passthru": { 00:34:52.721 "identify_ctrlr": true 00:34:52.721 } 00:34:52.721 } 00:34:52.721 } 00:34:52.721 00:34:52.721 INFO: response: 00:34:52.721 { 00:34:52.721 "jsonrpc": "2.0", 00:34:52.721 "id": 1, 00:34:52.721 "result": true 00:34:52.721 } 00:34:52.721 00:34:52.721 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.721 04:51:12 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:52.721 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.721 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:52.721 INFO: Setting log level to 20 00:34:52.721 INFO: Setting log level to 20 00:34:52.721 INFO: Log level set to 20 00:34:52.721 INFO: Log level set to 20 00:34:52.721 INFO: Requests: 00:34:52.721 { 00:34:52.721 "jsonrpc": "2.0", 00:34:52.721 "method": "framework_start_init", 00:34:52.721 "id": 1 00:34:52.721 } 00:34:52.721 00:34:52.721 INFO: Requests: 00:34:52.721 { 00:34:52.721 "jsonrpc": "2.0", 00:34:52.721 "method": "framework_start_init", 00:34:52.721 "id": 1 00:34:52.721 } 00:34:52.721 00:34:52.721 [2024-07-14 04:51:12.816084] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:52.721 INFO: response: 00:34:52.721 { 00:34:52.721 "jsonrpc": "2.0", 00:34:52.721 "id": 1, 00:34:52.721 "result": true 00:34:52.721 } 00:34:52.721 00:34:52.721 INFO: response: 00:34:52.721 { 00:34:52.721 "jsonrpc": "2.0", 00:34:52.721 "id": 1, 00:34:52.721 "result": true 00:34:52.721 } 00:34:52.721 00:34:52.721 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.722 04:51:12 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:52.722 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.722 04:51:12 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:34:52.722 INFO: Setting log level to 40 00:34:52.722 INFO: Setting log level to 40 00:34:52.722 INFO: Setting log level to 40 00:34:52.722 [2024-07-14 04:51:12.825984] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:52.722 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.722 04:51:12 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:52.722 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:52.722 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:52.722 04:51:12 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:34:52.722 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.722 04:51:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.000 Nvme0n1 00:34:56.000 04:51:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.000 04:51:15 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:56.000 04:51:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.000 04:51:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.000 04:51:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.000 04:51:15 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:56.000 04:51:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.000 04:51:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.000 04:51:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.000 04:51:15 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:56.000 04:51:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.000 04:51:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.000 [2024-07-14 04:51:15.710974] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:56.000 04:51:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.000 04:51:15 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:56.000 04:51:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.000 04:51:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.000 [ 00:34:56.000 { 00:34:56.000 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:56.000 "subtype": "Discovery", 00:34:56.000 "listen_addresses": [], 00:34:56.000 "allow_any_host": true, 00:34:56.000 "hosts": [] 00:34:56.000 }, 00:34:56.000 { 00:34:56.000 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:56.000 "subtype": "NVMe", 00:34:56.000 "listen_addresses": [ 00:34:56.000 { 00:34:56.000 "trtype": "TCP", 00:34:56.000 "adrfam": "IPv4", 00:34:56.000 "traddr": "10.0.0.2", 00:34:56.000 "trsvcid": "4420" 00:34:56.000 } 00:34:56.000 ], 00:34:56.000 "allow_any_host": true, 00:34:56.000 "hosts": [], 00:34:56.000 "serial_number": 
"SPDK00000000000001", 00:34:56.000 "model_number": "SPDK bdev Controller", 00:34:56.000 "max_namespaces": 1, 00:34:56.000 "min_cntlid": 1, 00:34:56.000 "max_cntlid": 65519, 00:34:56.000 "namespaces": [ 00:34:56.000 { 00:34:56.000 "nsid": 1, 00:34:56.000 "bdev_name": "Nvme0n1", 00:34:56.000 "name": "Nvme0n1", 00:34:56.000 "nguid": "EF8CDA39CA814676B5C615E972524F4C", 00:34:56.000 "uuid": "ef8cda39-ca81-4676-b5c6-15e972524f4c" 00:34:56.000 } 00:34:56.000 ] 00:34:56.000 } 00:34:56.000 ] 00:34:56.000 04:51:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.000 04:51:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:56.000 04:51:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:56.000 04:51:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:56.000 EAL: No free 2048 kB hugepages reported on node 1 00:34:56.000 04:51:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:34:56.000 04:51:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:56.000 04:51:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:56.000 04:51:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:56.001 EAL: No free 2048 kB hugepages reported on node 1 00:34:56.001 04:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:56.001 04:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:34:56.001 04:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:56.001 04:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:56.001 04:51:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.001 04:51:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.001 04:51:16 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.001 04:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:56.001 04:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:56.001 04:51:16 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:56.001 04:51:16 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:56.001 04:51:16 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:56.001 04:51:16 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:56.001 04:51:16 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:56.001 04:51:16 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:56.001 rmmod nvme_tcp 00:34:56.001 rmmod nvme_fabrics 00:34:56.001 rmmod nvme_keyring 00:34:56.001 04:51:16 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:56.001 04:51:16 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:56.001 04:51:16 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:56.001 04:51:16 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2955809 ']' 00:34:56.001 04:51:16 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2955809 00:34:56.001 04:51:16 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 2955809 ']' 00:34:56.001 04:51:16 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 2955809 00:34:56.001 04:51:16 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:34:56.001 04:51:16 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:56.001 04:51:16 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2955809 00:34:56.001 04:51:16 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:56.001 04:51:16 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:56.001 04:51:16 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2955809' 00:34:56.001 killing process with pid 2955809 00:34:56.001 04:51:16 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 2955809 00:34:56.001 04:51:16 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 2955809 00:34:57.899 04:51:17 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:57.899 04:51:17 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:57.899 04:51:17 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:57.899 04:51:17 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:57.899 04:51:17 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:57.899 04:51:17 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:57.899 04:51:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:57.899 04:51:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:59.801 04:51:19 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:59.801 00:34:59.801 real 0m17.943s 00:34:59.801 user 0m26.686s 00:34:59.801 sys 0m2.250s 00:34:59.801 04:51:19 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:59.801 04:51:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.801 ************************************ 00:34:59.801 END TEST nvmf_identify_passthru 00:34:59.801 ************************************ 00:34:59.801 04:51:19 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:59.801 04:51:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:59.801 04:51:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:59.801 04:51:19 -- common/autotest_common.sh@10 -- # set +x 00:34:59.801 ************************************ 00:34:59.801 START TEST nvmf_dif 00:34:59.801 ************************************ 00:34:59.801 04:51:19 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:59.801 * Looking for test storage... 
00:34:59.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:59.801 04:51:19 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:59.801 04:51:19 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:59.801 04:51:19 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:59.801 04:51:19 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:59.801 04:51:19 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.801 04:51:19 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.801 04:51:19 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.801 04:51:19 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:34:59.801 04:51:19 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:59.801 04:51:19 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:59.801 04:51:19 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:59.801 04:51:19 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:59.801 04:51:19 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:59.801 04:51:19 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:59.801 04:51:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:59.801 04:51:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:59.801 04:51:19 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:34:59.801 04:51:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:01.700 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:01.700 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:01.700 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:01.700 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:01.700 04:51:21 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:01.701 04:51:21 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:01.701 04:51:21 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:01.701 04:51:21 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:01.701 04:51:21 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:01.701 04:51:21 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:01.701 04:51:21 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:01.701 04:51:21 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:01.701 04:51:21 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:01.701 04:51:21 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:01.958 04:51:21 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:01.958 04:51:21 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:01.958 04:51:21 
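The interface plumbing above leaves the target port (cvl_0_0, 10.0.0.2/24) inside the cvl_0_0_ns_spdk namespace and the initiator port (cvl_0_1, 10.0.0.1/24) in the root namespace; stripped of the shell tracing, the setup is roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP reach the listener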
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:01.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:01.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:35:01.958 00:35:01.958 --- 10.0.0.2 ping statistics --- 00:35:01.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.958 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:35:01.958 04:51:21 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:01.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:01.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:35:01.958 00:35:01.958 --- 10.0.0.1 ping statistics --- 00:35:01.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.958 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:35:01.958 04:51:21 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:01.958 04:51:21 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:01.958 04:51:21 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:01.958 04:51:21 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:02.892 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:02.892 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:02.892 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:02.892 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:02.892 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:02.892 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:02.892 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:02.892 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:02.892 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:02.892 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:02.892 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:02.892 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:02.893 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:02.893 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:02.893 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:02.893 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:02.893 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:03.151 04:51:23 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:03.151 04:51:23 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:03.151 04:51:23 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:03.151 04:51:23 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:03.151 04:51:23 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:03.151 04:51:23 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:03.151 04:51:23 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:03.151 04:51:23 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:03.151 04:51:23 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:03.151 04:51:23 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:03.151 04:51:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:03.151 04:51:23 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2958938 00:35:03.151 04:51:23 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:03.151 04:51:23 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2958938 00:35:03.151 04:51:23 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 2958938 ']' 00:35:03.151 04:51:23 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:03.151 04:51:23 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:03.151 04:51:23 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:03.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:03.151 04:51:23 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:03.151 04:51:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:03.151 [2024-07-14 04:51:23.177155] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:03.151 [2024-07-14 04:51:23.177243] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:03.151 EAL: No free 2048 kB hugepages reported on node 1 00:35:03.151 [2024-07-14 04:51:23.246643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.151 [2024-07-14 04:51:23.335731] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:03.151 [2024-07-14 04:51:23.335795] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:03.151 [2024-07-14 04:51:23.335812] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:03.151 [2024-07-14 04:51:23.335826] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:03.151 [2024-07-14 04:51:23.335838] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
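Unlike the identify_passthru run earlier in this log, which started nvmf_tgt with --wait-for-rpc so that nvmf_set_config --passthru-identify-ctrlr could be applied before framework_start_init, the dif target initializes immediately. Condensed (paths shortened to the repository root), the two launch lines are:

    # identify_passthru: hold initialization until the passthru flag is set over RPC (4 cores)
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
    # dif tests: no pre-init configuration is needed, so the app starts straight away (1 core)
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF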
00:35:03.151 [2024-07-14 04:51:23.335877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.410 04:51:23 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:03.410 04:51:23 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:35:03.410 04:51:23 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:03.410 04:51:23 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:03.410 04:51:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:03.410 04:51:23 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:03.410 04:51:23 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:03.410 04:51:23 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:03.410 04:51:23 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.410 04:51:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:03.410 [2024-07-14 04:51:23.481585] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:03.410 04:51:23 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.410 04:51:23 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:03.410 04:51:23 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:03.410 04:51:23 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:03.410 04:51:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:03.410 ************************************ 00:35:03.410 START TEST fio_dif_1_default 00:35:03.410 ************************************ 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:03.410 bdev_null0 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:03.410 [2024-07-14 04:51:23.537848] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:03.410 { 00:35:03.410 "params": { 00:35:03.410 "name": "Nvme$subsystem", 00:35:03.410 "trtype": "$TEST_TRANSPORT", 00:35:03.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:03.410 "adrfam": "ipv4", 00:35:03.410 "trsvcid": "$NVMF_PORT", 00:35:03.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:03.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:03.410 "hdgst": ${hdgst:-false}, 00:35:03.410 "ddgst": ${ddgst:-false} 00:35:03.410 }, 00:35:03.410 "method": "bdev_nvme_attach_controller" 00:35:03.410 } 00:35:03.410 EOF 00:35:03.410 )") 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:03.410 "params": { 00:35:03.410 "name": "Nvme0", 00:35:03.410 "trtype": "tcp", 00:35:03.410 "traddr": "10.0.0.2", 00:35:03.410 "adrfam": "ipv4", 00:35:03.410 "trsvcid": "4420", 00:35:03.410 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:03.410 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:03.410 "hdgst": false, 00:35:03.410 "ddgst": false 00:35:03.410 }, 00:35:03.410 "method": "bdev_nvme_attach_controller" 00:35:03.410 }' 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:03.410 04:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.668 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:03.668 fio-3.35 00:35:03.668 Starting 1 thread 00:35:03.668 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.861 00:35:15.861 filename0: (groupid=0, jobs=1): err= 0: pid=2959169: Sun Jul 14 04:51:34 2024 00:35:15.861 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10015msec) 00:35:15.861 slat (nsec): min=4419, max=70405, avg=9425.50, stdev=3415.23 00:35:15.861 clat (usec): min=40902, max=46439, avg=41524.38, stdev=595.19 00:35:15.861 lat (usec): min=40910, max=46463, avg=41533.81, stdev=595.29 00:35:15.861 clat percentiles (usec): 00:35:15.861 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:15.861 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:35:15.861 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:15.861 | 99.00th=[42206], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:35:15.861 | 99.99th=[46400] 00:35:15.861 bw ( KiB/s): min= 351, max= 416, per=99.47%, avg=383.95, stdev=10.55, samples=20 00:35:15.861 iops : min= 87, max= 104, avg=95.95, stdev= 2.76, samples=20 00:35:15.861 
lat (msec) : 50=100.00% 00:35:15.861 cpu : usr=89.37%, sys=10.38%, ctx=12, majf=0, minf=273 00:35:15.861 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.861 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.861 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:15.861 00:35:15.861 Run status group 0 (all jobs): 00:35:15.861 READ: bw=385KiB/s (394kB/s), 385KiB/s-385KiB/s (394kB/s-394kB/s), io=3856KiB (3949kB), run=10015-10015msec 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.861 00:35:15.861 real 0m11.188s 00:35:15.861 user 0m10.332s 00:35:15.861 sys 0m1.330s 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:15.861 ************************************ 00:35:15.861 END TEST fio_dif_1_default 00:35:15.861 ************************************ 00:35:15.861 04:51:34 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:15.861 04:51:34 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:15.861 04:51:34 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:15.861 04:51:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:15.861 ************************************ 00:35:15.861 START TEST fio_dif_1_multi_subsystems 00:35:15.861 ************************************ 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:15.861 04:51:34 
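The dif test cases that follow use the same target-side recipe as fio_dif_1_default above: a metadata-capable null bdev behind a transport created with --dif-insert-or-strip, so the target inserts and strips DIF on behalf of the initiator. The relevant RPCs, condensed from the trace (the multi-subsystems case adds a second bdev_null1/cnode1 pair built the same way):

    scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    # 64 MB null bdev, 512-byte blocks with 16 bytes of metadata carrying DIF type 1
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

fio then drives these namespaces from the initiator through the spdk_bdev ioengine, which is where the per-thread job output above and below comes from.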
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:15.861 bdev_null0 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:15.861 [2024-07-14 04:51:34.777956] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:15.861 bdev_null1 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:15.861 { 00:35:15.861 "params": { 00:35:15.861 "name": "Nvme$subsystem", 00:35:15.861 "trtype": "$TEST_TRANSPORT", 00:35:15.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:15.861 "adrfam": "ipv4", 00:35:15.861 "trsvcid": "$NVMF_PORT", 00:35:15.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:15.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:15.861 "hdgst": ${hdgst:-false}, 00:35:15.861 "ddgst": ${ddgst:-false} 00:35:15.861 }, 00:35:15.861 "method": "bdev_nvme_attach_controller" 00:35:15.861 } 00:35:15.861 EOF 00:35:15.861 )") 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:35:15.861 04:51:34 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:15.861 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:15.862 { 00:35:15.862 "params": { 00:35:15.862 "name": "Nvme$subsystem", 00:35:15.862 "trtype": "$TEST_TRANSPORT", 00:35:15.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:15.862 "adrfam": "ipv4", 00:35:15.862 "trsvcid": "$NVMF_PORT", 00:35:15.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:15.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:15.862 "hdgst": ${hdgst:-false}, 00:35:15.862 "ddgst": ${ddgst:-false} 00:35:15.862 }, 00:35:15.862 "method": "bdev_nvme_attach_controller" 00:35:15.862 } 00:35:15.862 EOF 00:35:15.862 )") 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:15.862 "params": { 00:35:15.862 "name": "Nvme0", 00:35:15.862 "trtype": "tcp", 00:35:15.862 "traddr": "10.0.0.2", 00:35:15.862 "adrfam": "ipv4", 00:35:15.862 "trsvcid": "4420", 00:35:15.862 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:15.862 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:15.862 "hdgst": false, 00:35:15.862 "ddgst": false 00:35:15.862 }, 00:35:15.862 "method": "bdev_nvme_attach_controller" 00:35:15.862 },{ 00:35:15.862 "params": { 00:35:15.862 "name": "Nvme1", 00:35:15.862 "trtype": "tcp", 00:35:15.862 "traddr": "10.0.0.2", 00:35:15.862 "adrfam": "ipv4", 00:35:15.862 "trsvcid": "4420", 00:35:15.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:15.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:15.862 "hdgst": false, 00:35:15.862 "ddgst": false 00:35:15.862 }, 00:35:15.862 "method": "bdev_nvme_attach_controller" 00:35:15.862 }' 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:15.862 04:51:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:15.862 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:15.862 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:15.862 fio-3.35 00:35:15.862 Starting 2 threads 00:35:15.862 EAL: No free 2048 kB hugepages reported on node 1 00:35:25.827 00:35:25.827 filename0: (groupid=0, jobs=1): err= 0: pid=2960573: Sun Jul 14 04:51:45 2024 00:35:25.827 read: IOPS=184, BW=738KiB/s (755kB/s)(7408KiB/10042msec) 00:35:25.827 slat (nsec): min=6744, max=43004, avg=10982.07, stdev=5672.50 00:35:25.827 clat (usec): min=864, max=42782, avg=21654.30, stdev=20418.50 00:35:25.827 lat (usec): min=871, max=42793, avg=21665.29, stdev=20418.26 00:35:25.827 clat percentiles (usec): 00:35:25.827 | 1.00th=[ 906], 5.00th=[ 947], 10.00th=[ 963], 20.00th=[ 988], 00:35:25.827 | 30.00th=[ 1004], 40.00th=[ 1037], 50.00th=[41157], 60.00th=[41681], 00:35:25.827 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:35:25.827 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:35:25.827 | 99.99th=[42730] 
00:35:25.827 bw ( KiB/s): min= 640, max= 768, per=49.98%, avg=739.20, stdev=37.29, samples=20 00:35:25.827 iops : min= 160, max= 192, avg=184.80, stdev= 9.32, samples=20 00:35:25.827 lat (usec) : 1000=28.51% 00:35:25.827 lat (msec) : 2=20.95%, 50=50.54% 00:35:25.827 cpu : usr=93.70%, sys=6.01%, ctx=18, majf=0, minf=97 00:35:25.827 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.827 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.827 issued rwts: total=1852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.827 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:25.827 filename1: (groupid=0, jobs=1): err= 0: pid=2960574: Sun Jul 14 04:51:45 2024 00:35:25.827 read: IOPS=185, BW=744KiB/s (762kB/s)(7440KiB/10001msec) 00:35:25.827 slat (nsec): min=6947, max=60680, avg=10817.06, stdev=5435.95 00:35:25.827 clat (usec): min=874, max=42494, avg=21473.90, stdev=20447.05 00:35:25.827 lat (usec): min=882, max=42526, avg=21484.71, stdev=20446.56 00:35:25.827 clat percentiles (usec): 00:35:25.827 | 1.00th=[ 898], 5.00th=[ 930], 10.00th=[ 947], 20.00th=[ 963], 00:35:25.827 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[41157], 60.00th=[41681], 00:35:25.827 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:25.827 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:35:25.827 | 99.99th=[42730] 00:35:25.827 bw ( KiB/s): min= 672, max= 768, per=50.18%, avg=742.74, stdev=34.69, samples=19 00:35:25.827 iops : min= 168, max= 192, avg=185.68, stdev= 8.67, samples=19 00:35:25.827 lat (usec) : 1000=39.30% 00:35:25.827 lat (msec) : 2=10.59%, 50=50.11% 00:35:25.827 cpu : usr=94.55%, sys=5.15%, ctx=14, majf=0, minf=176 00:35:25.827 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.828 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.828 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:25.828 00:35:25.828 Run status group 0 (all jobs): 00:35:25.828 READ: bw=1479KiB/s (1514kB/s), 738KiB/s-744KiB/s (755kB/s-762kB/s), io=14.5MiB (15.2MB), run=10001-10042msec 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.150 00:35:26.150 real 0m11.466s 00:35:26.150 user 0m20.425s 00:35:26.150 sys 0m1.389s 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:26.150 04:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:26.150 ************************************ 00:35:26.150 END TEST fio_dif_1_multi_subsystems 00:35:26.150 ************************************ 00:35:26.150 04:51:46 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:26.150 04:51:46 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:26.150 04:51:46 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:26.150 04:51:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:26.150 ************************************ 00:35:26.150 START TEST fio_dif_rand_params 00:35:26.150 ************************************ 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:26.150 
04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:26.150 bdev_null0 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.150 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:26.150 [2024-07-14 04:51:46.289415] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:26.435 { 00:35:26.435 "params": { 
00:35:26.435 "name": "Nvme$subsystem", 00:35:26.435 "trtype": "$TEST_TRANSPORT", 00:35:26.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:26.435 "adrfam": "ipv4", 00:35:26.435 "trsvcid": "$NVMF_PORT", 00:35:26.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:26.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:26.435 "hdgst": ${hdgst:-false}, 00:35:26.435 "ddgst": ${ddgst:-false} 00:35:26.435 }, 00:35:26.435 "method": "bdev_nvme_attach_controller" 00:35:26.435 } 00:35:26.435 EOF 00:35:26.435 )") 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:26.435 "params": { 00:35:26.435 "name": "Nvme0", 00:35:26.435 "trtype": "tcp", 00:35:26.435 "traddr": "10.0.0.2", 00:35:26.435 "adrfam": "ipv4", 00:35:26.435 "trsvcid": "4420", 00:35:26.435 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:26.435 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:26.435 "hdgst": false, 00:35:26.435 "ddgst": false 00:35:26.435 }, 00:35:26.435 "method": "bdev_nvme_attach_controller" 00:35:26.435 }' 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:26.435 04:51:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:26.435 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:26.435 ... 
00:35:26.435 fio-3.35 00:35:26.435 Starting 3 threads 00:35:26.435 EAL: No free 2048 kB hugepages reported on node 1 00:35:32.992 00:35:32.992 filename0: (groupid=0, jobs=1): err= 0: pid=2961971: Sun Jul 14 04:51:52 2024 00:35:32.992 read: IOPS=140, BW=17.5MiB/s (18.3MB/s)(87.6MiB/5007msec) 00:35:32.992 slat (nsec): min=7242, max=65027, avg=11737.70, stdev=3372.59 00:35:32.992 clat (usec): min=7900, max=94744, avg=21402.67, stdev=17057.69 00:35:32.992 lat (usec): min=7912, max=94756, avg=21414.41, stdev=17057.67 00:35:32.992 clat percentiles (usec): 00:35:32.992 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10552], 00:35:32.992 | 30.00th=[11207], 40.00th=[11994], 50.00th=[13042], 60.00th=[14222], 00:35:32.992 | 70.00th=[15664], 80.00th=[50070], 90.00th=[52167], 95.00th=[53740], 00:35:32.992 | 99.00th=[55837], 99.50th=[56886], 99.90th=[94897], 99.95th=[94897], 00:35:32.992 | 99.99th=[94897] 00:35:32.992 bw ( KiB/s): min=11520, max=27392, per=28.36%, avg=17868.80, stdev=4828.39, samples=10 00:35:32.992 iops : min= 90, max= 214, avg=139.60, stdev=37.72, samples=10 00:35:32.992 lat (msec) : 10=12.41%, 20=64.76%, 50=2.00%, 100=20.83% 00:35:32.992 cpu : usr=92.97%, sys=6.63%, ctx=7, majf=0, minf=150 00:35:32.992 IO depths : 1=3.6%, 2=96.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.992 issued rwts: total=701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.992 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:32.992 filename0: (groupid=0, jobs=1): err= 0: pid=2961972: Sun Jul 14 04:51:52 2024 00:35:32.992 read: IOPS=154, BW=19.3MiB/s (20.3MB/s)(97.6MiB/5046msec) 00:35:32.992 slat (nsec): min=7122, max=34829, avg=11358.05, stdev=2697.29 00:35:32.992 clat (usec): min=5678, max=91945, avg=19307.60, stdev=17296.52 00:35:32.992 lat (usec): min=5689, max=91958, avg=19318.96, stdev=17296.49 00:35:32.992 clat percentiles (usec): 00:35:32.992 | 1.00th=[ 6259], 5.00th=[ 6783], 10.00th=[ 7439], 20.00th=[ 8979], 00:35:32.992 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[11207], 60.00th=[12256], 00:35:32.992 | 70.00th=[13304], 80.00th=[49021], 90.00th=[51119], 95.00th=[53216], 00:35:32.992 | 99.00th=[54789], 99.50th=[55313], 99.90th=[91751], 99.95th=[91751], 00:35:32.992 | 99.99th=[91751] 00:35:32.992 bw ( KiB/s): min=12288, max=28672, per=31.61%, avg=19920.90, stdev=5946.85, samples=10 00:35:32.992 iops : min= 96, max= 224, avg=155.60, stdev=46.45, samples=10 00:35:32.992 lat (msec) : 10=38.41%, 20=39.95%, 50=5.38%, 100=16.26% 00:35:32.992 cpu : usr=92.51%, sys=6.86%, ctx=13, majf=0, minf=74 00:35:32.992 IO depths : 1=2.9%, 2=97.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.992 issued rwts: total=781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.993 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:32.993 filename0: (groupid=0, jobs=1): err= 0: pid=2961973: Sun Jul 14 04:51:52 2024 00:35:32.993 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(125MiB/5043msec) 00:35:32.993 slat (nsec): min=7122, max=27162, avg=11052.95, stdev=2165.64 00:35:32.993 clat (usec): min=5987, max=92925, avg=15040.69, stdev=14299.57 00:35:32.993 lat (usec): min=5998, max=92937, avg=15051.74, stdev=14299.58 00:35:32.993 clat percentiles (usec): 
00:35:32.993 | 1.00th=[ 6325], 5.00th=[ 6587], 10.00th=[ 6849], 20.00th=[ 8094], 00:35:32.993 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10814], 00:35:32.993 | 70.00th=[12125], 80.00th=[13304], 90.00th=[49021], 95.00th=[50594], 00:35:32.993 | 99.00th=[53740], 99.50th=[55837], 99.90th=[92799], 99.95th=[92799], 00:35:32.993 | 99.99th=[92799] 00:35:32.993 bw ( KiB/s): min=19200, max=33024, per=40.60%, avg=25580.20, stdev=4649.95, samples=10 00:35:32.993 iops : min= 150, max= 258, avg=199.80, stdev=36.29, samples=10 00:35:32.993 lat (msec) : 10=49.60%, 20=38.42%, 50=4.49%, 100=7.49% 00:35:32.993 cpu : usr=90.52%, sys=8.49%, ctx=17, majf=0, minf=85 00:35:32.993 IO depths : 1=3.7%, 2=96.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.993 issued rwts: total=1002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.993 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:32.993 00:35:32.993 Run status group 0 (all jobs): 00:35:32.993 READ: bw=61.5MiB/s (64.5MB/s), 17.5MiB/s-24.8MiB/s (18.3MB/s-26.0MB/s), io=311MiB (326MB), run=5007-5046msec 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
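The 128 KiB random-read run summarized above is driven through the SPDK fio bdev plugin, with both the bdev JSON config and the fio job file handed over as anonymous /dev/fd descriptors; that is where the /dev/fd/62 and /dev/fd/61 arguments in the trace come from. A rough standalone equivalent is sketched below, assuming gen_nvmf_target_json from nvmf/common.sh is available and that the attached controller exposes a bdev named Nvme0n1; the job section is reconstructed from the parameters fio reports (randread, bs=128k, iodepth=3, 3 jobs, 5 s runtime), since the trace never prints the generated job file:

# Preload the fio plugin built with SPDK and feed both inputs via process
# substitution, which shows up as /dev/fd/NN paths on the fio command line.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
/usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0) \
    <(cat <<'FIO'
[global]
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5

[filename0]
filename=Nvme0n1
FIO
)

The asan_lib= lines in the trace come from the same wrapper: it runs ldd on the plugin and greps for libasan/libclang_rt.asan, and if a sanitizer runtime is linked it is prepended to LD_PRELOAD so fio loads it first; in this build none is found, so LD_PRELOAD carries only the plugin itself.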
00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.993 bdev_null0 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.993 [2024-07-14 04:51:52.436378] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.993 bdev_null1 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.993 bdev_null2 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:35:32.993 { 00:35:32.993 "params": { 00:35:32.993 "name": "Nvme$subsystem", 00:35:32.993 "trtype": "$TEST_TRANSPORT", 00:35:32.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:32.993 "adrfam": "ipv4", 00:35:32.993 "trsvcid": "$NVMF_PORT", 00:35:32.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:32.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:32.993 "hdgst": ${hdgst:-false}, 00:35:32.993 "ddgst": ${ddgst:-false} 00:35:32.993 }, 00:35:32.993 "method": "bdev_nvme_attach_controller" 00:35:32.993 } 00:35:32.993 EOF 00:35:32.993 )") 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:32.993 04:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:32.994 { 00:35:32.994 "params": { 00:35:32.994 "name": "Nvme$subsystem", 00:35:32.994 "trtype": "$TEST_TRANSPORT", 00:35:32.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:32.994 "adrfam": "ipv4", 00:35:32.994 "trsvcid": "$NVMF_PORT", 00:35:32.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:32.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:32.994 "hdgst": ${hdgst:-false}, 00:35:32.994 "ddgst": ${ddgst:-false} 00:35:32.994 }, 00:35:32.994 "method": "bdev_nvme_attach_controller" 00:35:32.994 } 00:35:32.994 EOF 00:35:32.994 )") 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:32.994 { 00:35:32.994 "params": { 00:35:32.994 "name": "Nvme$subsystem", 00:35:32.994 "trtype": "$TEST_TRANSPORT", 00:35:32.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:32.994 "adrfam": "ipv4", 00:35:32.994 "trsvcid": "$NVMF_PORT", 00:35:32.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:32.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:32.994 "hdgst": ${hdgst:-false}, 00:35:32.994 "ddgst": ${ddgst:-false} 00:35:32.994 }, 00:35:32.994 "method": "bdev_nvme_attach_controller" 00:35:32.994 } 00:35:32.994 EOF 00:35:32.994 )") 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:32.994 "params": { 00:35:32.994 "name": "Nvme0", 00:35:32.994 "trtype": "tcp", 00:35:32.994 "traddr": "10.0.0.2", 00:35:32.994 "adrfam": "ipv4", 00:35:32.994 "trsvcid": "4420", 00:35:32.994 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:32.994 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:32.994 "hdgst": false, 00:35:32.994 "ddgst": false 00:35:32.994 }, 00:35:32.994 "method": "bdev_nvme_attach_controller" 00:35:32.994 },{ 00:35:32.994 "params": { 00:35:32.994 "name": "Nvme1", 00:35:32.994 "trtype": "tcp", 00:35:32.994 "traddr": "10.0.0.2", 00:35:32.994 "adrfam": "ipv4", 00:35:32.994 "trsvcid": "4420", 00:35:32.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:32.994 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:32.994 "hdgst": false, 00:35:32.994 "ddgst": false 00:35:32.994 }, 00:35:32.994 "method": "bdev_nvme_attach_controller" 00:35:32.994 },{ 00:35:32.994 "params": { 00:35:32.994 "name": "Nvme2", 00:35:32.994 "trtype": "tcp", 00:35:32.994 "traddr": "10.0.0.2", 00:35:32.994 "adrfam": "ipv4", 00:35:32.994 "trsvcid": "4420", 00:35:32.994 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:32.994 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:32.994 "hdgst": false, 00:35:32.994 "ddgst": false 00:35:32.994 }, 00:35:32.994 "method": "bdev_nvme_attach_controller" 00:35:32.994 }' 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:32.994 04:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:32.994 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:32.994 ... 00:35:32.994 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:32.994 ... 00:35:32.994 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:32.994 ... 00:35:32.994 fio-3.35 00:35:32.994 Starting 24 threads 00:35:32.994 EAL: No free 2048 kB hugepages reported on node 1 00:35:45.217 00:35:45.217 filename0: (groupid=0, jobs=1): err= 0: pid=2962830: Sun Jul 14 04:52:03 2024 00:35:45.217 read: IOPS=64, BW=259KiB/s (265kB/s)(2624KiB/10149msec) 00:35:45.217 slat (usec): min=11, max=186, avg=34.59, stdev=23.91 00:35:45.217 clat (msec): min=114, max=420, avg=245.41, stdev=45.73 00:35:45.217 lat (msec): min=114, max=420, avg=245.45, stdev=45.73 00:35:45.217 clat percentiles (msec): 00:35:45.217 | 1.00th=[ 134], 5.00th=[ 174], 10.00th=[ 190], 20.00th=[ 211], 00:35:45.217 | 30.00th=[ 226], 40.00th=[ 236], 50.00th=[ 247], 60.00th=[ 253], 00:35:45.217 | 70.00th=[ 266], 80.00th=[ 279], 90.00th=[ 296], 95.00th=[ 317], 00:35:45.217 | 99.00th=[ 380], 99.50th=[ 388], 99.90th=[ 422], 99.95th=[ 422], 00:35:45.217 | 99.99th=[ 422] 00:35:45.217 bw ( KiB/s): min= 128, max= 384, per=3.96%, avg=256.00, stdev=55.43, samples=20 00:35:45.217 iops : min= 32, max= 96, avg=64.00, stdev=13.86, samples=20 00:35:45.217 lat (msec) : 250=57.16%, 500=42.84% 00:35:45.217 cpu : usr=95.74%, sys=2.40%, ctx=118, majf=0, minf=9 00:35:45.217 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:35:45.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.217 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.217 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.217 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.217 filename0: (groupid=0, jobs=1): err= 0: pid=2962831: Sun Jul 14 04:52:03 2024 00:35:45.217 read: IOPS=70, BW=283KiB/s (290kB/s)(2880KiB/10170msec) 00:35:45.217 slat (usec): min=3, max=105, avg=49.23, stdev=24.19 00:35:45.217 clat (msec): min=16, max=387, avg=223.93, stdev=68.76 00:35:45.217 lat (msec): min=16, max=387, avg=223.98, stdev=68.77 00:35:45.217 clat percentiles (msec): 00:35:45.218 | 1.00th=[ 17], 5.00th=[ 57], 10.00th=[ 140], 20.00th=[ 190], 00:35:45.218 | 30.00th=[ 209], 40.00th=[ 228], 50.00th=[ 239], 60.00th=[ 247], 00:35:45.218 | 70.00th=[ 255], 80.00th=[ 266], 90.00th=[ 296], 95.00th=[ 313], 00:35:45.218 | 99.00th=[ 363], 99.50th=[ 380], 99.90th=[ 388], 99.95th=[ 388], 00:35:45.218 | 99.99th=[ 388] 00:35:45.218 bw ( KiB/s): min= 128, max= 640, per=4.35%, avg=281.60, stdev=97.31, samples=20 00:35:45.218 iops : min= 32, max= 160, avg=70.40, stdev=24.33, samples=20 00:35:45.218 lat (msec) : 20=2.22%, 50=2.50%, 100=1.94%, 250=58.06%, 500=35.28% 00:35:45.218 cpu : usr=98.13%, sys=1.37%, ctx=27, majf=0, minf=9 00:35:45.218 IO 
depths : 1=3.2%, 2=9.3%, 4=24.9%, 8=53.3%, 16=9.3%, 32=0.0%, >=64=0.0% 00:35:45.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.218 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.218 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.218 filename0: (groupid=0, jobs=1): err= 0: pid=2962832: Sun Jul 14 04:52:03 2024 00:35:45.218 read: IOPS=66, BW=265KiB/s (271kB/s)(2688KiB/10157msec) 00:35:45.218 slat (usec): min=11, max=107, avg=38.83, stdev=13.51 00:35:45.218 clat (msec): min=142, max=407, avg=241.43, stdev=43.95 00:35:45.218 lat (msec): min=143, max=407, avg=241.47, stdev=43.95 00:35:45.218 clat percentiles (msec): 00:35:45.218 | 1.00th=[ 159], 5.00th=[ 171], 10.00th=[ 180], 20.00th=[ 207], 00:35:45.218 | 30.00th=[ 226], 40.00th=[ 232], 50.00th=[ 243], 60.00th=[ 247], 00:35:45.218 | 70.00th=[ 259], 80.00th=[ 268], 90.00th=[ 292], 95.00th=[ 321], 00:35:45.218 | 99.00th=[ 368], 99.50th=[ 388], 99.90th=[ 409], 99.95th=[ 409], 00:35:45.218 | 99.99th=[ 409] 00:35:45.218 bw ( KiB/s): min= 144, max= 384, per=4.05%, avg=262.40, stdev=46.55, samples=20 00:35:45.218 iops : min= 36, max= 96, avg=65.60, stdev=11.64, samples=20 00:35:45.218 lat (msec) : 250=63.39%, 500=36.61% 00:35:45.218 cpu : usr=98.39%, sys=1.18%, ctx=14, majf=0, minf=9 00:35:45.218 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:35:45.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.218 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.218 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.218 filename0: (groupid=0, jobs=1): err= 0: pid=2962833: Sun Jul 14 04:52:03 2024 00:35:45.218 read: IOPS=64, BW=259KiB/s (265kB/s)(2624KiB/10140msec) 00:35:45.218 slat (nsec): min=8548, max=72177, avg=23151.53, stdev=10361.22 00:35:45.218 clat (msec): min=121, max=389, avg=247.00, stdev=35.66 00:35:45.218 lat (msec): min=121, max=389, avg=247.03, stdev=35.66 00:35:45.218 clat percentiles (msec): 00:35:45.218 | 1.00th=[ 165], 5.00th=[ 192], 10.00th=[ 205], 20.00th=[ 222], 00:35:45.218 | 30.00th=[ 230], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 255], 00:35:45.218 | 70.00th=[ 266], 80.00th=[ 271], 90.00th=[ 284], 95.00th=[ 305], 00:35:45.218 | 99.00th=[ 342], 99.50th=[ 368], 99.90th=[ 388], 99.95th=[ 388], 00:35:45.218 | 99.99th=[ 388] 00:35:45.218 bw ( KiB/s): min= 128, max= 384, per=3.96%, avg=256.00, stdev=55.43, samples=20 00:35:45.218 iops : min= 32, max= 96, avg=64.00, stdev=13.86, samples=20 00:35:45.218 lat (msec) : 250=53.66%, 500=46.34% 00:35:45.218 cpu : usr=98.45%, sys=1.16%, ctx=23, majf=0, minf=9 00:35:45.218 IO depths : 1=4.6%, 2=10.8%, 4=25.0%, 8=51.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:35:45.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.218 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.218 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.218 filename0: (groupid=0, jobs=1): err= 0: pid=2962834: Sun Jul 14 04:52:03 2024 00:35:45.218 read: IOPS=64, BW=259KiB/s (265kB/s)(2624KiB/10137msec) 00:35:45.218 slat (nsec): min=8296, max=82278, avg=30748.39, stdev=14792.65 00:35:45.218 clat (msec): min=164, max=344, avg=246.95, stdev=36.63 
00:35:45.218 lat (msec): min=164, max=344, avg=246.98, stdev=36.62 00:35:45.218 clat percentiles (msec): 00:35:45.218 | 1.00th=[ 165], 5.00th=[ 192], 10.00th=[ 207], 20.00th=[ 224], 00:35:45.218 | 30.00th=[ 232], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:35:45.218 | 70.00th=[ 262], 80.00th=[ 268], 90.00th=[ 288], 95.00th=[ 321], 00:35:45.218 | 99.00th=[ 347], 99.50th=[ 347], 99.90th=[ 347], 99.95th=[ 347], 00:35:45.218 | 99.99th=[ 347] 00:35:45.218 bw ( KiB/s): min= 128, max= 384, per=3.94%, avg=256.00, stdev=58.73, samples=20 00:35:45.218 iops : min= 32, max= 96, avg=64.00, stdev=14.68, samples=20 00:35:45.218 lat (msec) : 250=60.98%, 500=39.02% 00:35:45.218 cpu : usr=97.90%, sys=1.56%, ctx=25, majf=0, minf=9 00:35:45.218 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:45.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.218 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.218 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.218 filename0: (groupid=0, jobs=1): err= 0: pid=2962835: Sun Jul 14 04:52:03 2024 00:35:45.218 read: IOPS=64, BW=259KiB/s (265kB/s)(2624KiB/10132msec) 00:35:45.218 slat (nsec): min=8109, max=51354, avg=26649.03, stdev=10316.99 00:35:45.218 clat (msec): min=187, max=313, avg=246.87, stdev=29.29 00:35:45.218 lat (msec): min=187, max=313, avg=246.89, stdev=29.29 00:35:45.218 clat percentiles (msec): 00:35:45.218 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 207], 20.00th=[ 224], 00:35:45.218 | 30.00th=[ 232], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 255], 00:35:45.218 | 70.00th=[ 266], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 305], 00:35:45.218 | 99.00th=[ 313], 99.50th=[ 313], 99.90th=[ 313], 99.95th=[ 313], 00:35:45.218 | 99.99th=[ 313] 00:35:45.218 bw ( KiB/s): min= 128, max= 384, per=3.96%, avg=256.00, stdev=58.73, samples=20 00:35:45.218 iops : min= 32, max= 96, avg=64.00, stdev=14.68, samples=20 00:35:45.218 lat (msec) : 250=53.66%, 500=46.34% 00:35:45.218 cpu : usr=98.28%, sys=1.27%, ctx=42, majf=0, minf=9 00:35:45.218 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:45.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.218 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.218 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.218 filename0: (groupid=0, jobs=1): err= 0: pid=2962836: Sun Jul 14 04:52:03 2024 00:35:45.218 read: IOPS=82, BW=330KiB/s (338kB/s)(3328KiB/10086msec) 00:35:45.218 slat (usec): min=4, max=419, avg=21.78, stdev=26.91 00:35:45.218 clat (msec): min=18, max=320, avg=192.51, stdev=56.58 00:35:45.218 lat (msec): min=18, max=320, avg=192.53, stdev=56.57 00:35:45.218 clat percentiles (msec): 00:35:45.218 | 1.00th=[ 19], 5.00th=[ 57], 10.00th=[ 140], 20.00th=[ 159], 00:35:45.218 | 30.00th=[ 169], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 209], 00:35:45.218 | 70.00th=[ 222], 80.00th=[ 243], 90.00th=[ 268], 95.00th=[ 271], 00:35:45.218 | 99.00th=[ 284], 99.50th=[ 284], 99.90th=[ 321], 99.95th=[ 321], 00:35:45.218 | 99.99th=[ 321] 00:35:45.218 bw ( KiB/s): min= 256, max= 640, per=5.04%, avg=326.40, stdev=95.21, samples=20 00:35:45.218 iops : min= 64, max= 160, avg=81.60, stdev=23.80, samples=20 00:35:45.218 lat (msec) : 20=1.92%, 50=1.92%, 100=1.92%, 250=76.68%, 500=17.55% 
00:35:45.218 cpu : usr=96.53%, sys=2.11%, ctx=48, majf=0, minf=9 00:35:45.218 IO depths : 1=2.4%, 2=8.5%, 4=25.0%, 8=54.0%, 16=10.1%, 32=0.0%, >=64=0.0% 00:35:45.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.219 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.219 issued rwts: total=832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.219 filename0: (groupid=0, jobs=1): err= 0: pid=2962837: Sun Jul 14 04:52:03 2024 00:35:45.219 read: IOPS=66, BW=267KiB/s (274kB/s)(2688KiB/10063msec) 00:35:45.219 slat (usec): min=8, max=204, avg=58.91, stdev=31.60 00:35:45.219 clat (msec): min=142, max=352, avg=239.07, stdev=36.31 00:35:45.219 lat (msec): min=143, max=352, avg=239.12, stdev=36.31 00:35:45.219 clat percentiles (msec): 00:35:45.219 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 190], 20.00th=[ 209], 00:35:45.219 | 30.00th=[ 224], 40.00th=[ 232], 50.00th=[ 243], 60.00th=[ 245], 00:35:45.219 | 70.00th=[ 255], 80.00th=[ 271], 90.00th=[ 284], 95.00th=[ 305], 00:35:45.219 | 99.00th=[ 313], 99.50th=[ 330], 99.90th=[ 351], 99.95th=[ 351], 00:35:45.219 | 99.99th=[ 351] 00:35:45.219 bw ( KiB/s): min= 128, max= 384, per=4.05%, avg=262.40, stdev=50.44, samples=20 00:35:45.219 iops : min= 32, max= 96, avg=65.60, stdev=12.61, samples=20 00:35:45.219 lat (msec) : 250=66.37%, 500=33.63% 00:35:45.219 cpu : usr=97.38%, sys=1.69%, ctx=44, majf=0, minf=9 00:35:45.219 IO depths : 1=4.8%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:35:45.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.219 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.219 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.219 filename1: (groupid=0, jobs=1): err= 0: pid=2962838: Sun Jul 14 04:52:03 2024 00:35:45.219 read: IOPS=66, BW=265KiB/s (271kB/s)(2688KiB/10154msec) 00:35:45.219 slat (nsec): min=7837, max=68345, avg=22235.87, stdev=11896.42 00:35:45.219 clat (msec): min=164, max=407, avg=241.55, stdev=37.79 00:35:45.219 lat (msec): min=164, max=407, avg=241.58, stdev=37.79 00:35:45.219 clat percentiles (msec): 00:35:45.219 | 1.00th=[ 165], 5.00th=[ 180], 10.00th=[ 192], 20.00th=[ 211], 00:35:45.219 | 30.00th=[ 226], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 247], 00:35:45.219 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 284], 95.00th=[ 292], 00:35:45.219 | 99.00th=[ 347], 99.50th=[ 409], 99.90th=[ 409], 99.95th=[ 409], 00:35:45.219 | 99.99th=[ 409] 00:35:45.219 bw ( KiB/s): min= 128, max= 384, per=4.05%, avg=262.40, stdev=50.44, samples=20 00:35:45.219 iops : min= 32, max= 96, avg=65.60, stdev=12.61, samples=20 00:35:45.219 lat (msec) : 250=65.03%, 500=34.97% 00:35:45.219 cpu : usr=97.84%, sys=1.64%, ctx=41, majf=0, minf=9 00:35:45.219 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:45.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.219 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.219 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.219 filename1: (groupid=0, jobs=1): err= 0: pid=2962839: Sun Jul 14 04:52:03 2024 00:35:45.219 read: IOPS=64, BW=258KiB/s (264kB/s)(2616KiB/10143msec) 00:35:45.219 slat (usec): min=9, max=114, avg=49.74, stdev=24.40 
00:35:45.219 clat (msec): min=143, max=418, avg=247.40, stdev=41.92 00:35:45.219 lat (msec): min=143, max=418, avg=247.45, stdev=41.91 00:35:45.219 clat percentiles (msec): 00:35:45.219 | 1.00th=[ 161], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 220], 00:35:45.219 | 30.00th=[ 230], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 253], 00:35:45.219 | 70.00th=[ 262], 80.00th=[ 268], 90.00th=[ 292], 95.00th=[ 342], 00:35:45.219 | 99.00th=[ 355], 99.50th=[ 384], 99.90th=[ 418], 99.95th=[ 418], 00:35:45.219 | 99.99th=[ 418] 00:35:45.219 bw ( KiB/s): min= 128, max= 384, per=3.94%, avg=255.20, stdev=59.07, samples=20 00:35:45.219 iops : min= 32, max= 96, avg=63.80, stdev=14.77, samples=20 00:35:45.219 lat (msec) : 250=59.94%, 500=40.06% 00:35:45.219 cpu : usr=98.24%, sys=1.33%, ctx=16, majf=0, minf=9 00:35:45.219 IO depths : 1=4.4%, 2=10.7%, 4=25.1%, 8=51.8%, 16=8.0%, 32=0.0%, >=64=0.0% 00:35:45.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.219 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.219 issued rwts: total=654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.219 filename1: (groupid=0, jobs=1): err= 0: pid=2962840: Sun Jul 14 04:52:03 2024 00:35:45.219 read: IOPS=65, BW=264KiB/s (270kB/s)(2680KiB/10157msec) 00:35:45.219 slat (usec): min=11, max=337, avg=80.81, stdev=41.03 00:35:45.219 clat (msec): min=142, max=417, avg=241.65, stdev=44.40 00:35:45.219 lat (msec): min=142, max=417, avg=241.73, stdev=44.41 00:35:45.219 clat percentiles (msec): 00:35:45.219 | 1.00th=[ 161], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 207], 00:35:45.219 | 30.00th=[ 224], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:35:45.219 | 70.00th=[ 259], 80.00th=[ 268], 90.00th=[ 292], 95.00th=[ 321], 00:35:45.219 | 99.00th=[ 388], 99.50th=[ 418], 99.90th=[ 418], 99.95th=[ 418], 00:35:45.219 | 99.99th=[ 418] 00:35:45.219 bw ( KiB/s): min= 128, max= 384, per=4.04%, avg=261.60, stdev=50.67, samples=20 00:35:45.219 iops : min= 32, max= 96, avg=65.40, stdev=12.67, samples=20 00:35:45.219 lat (msec) : 250=63.28%, 500=36.72% 00:35:45.219 cpu : usr=94.72%, sys=2.76%, ctx=209, majf=0, minf=9 00:35:45.219 IO depths : 1=3.3%, 2=9.6%, 4=25.1%, 8=53.0%, 16=9.1%, 32=0.0%, >=64=0.0% 00:35:45.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.219 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.219 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.219 filename1: (groupid=0, jobs=1): err= 0: pid=2962841: Sun Jul 14 04:52:03 2024 00:35:45.219 read: IOPS=64, BW=259KiB/s (265kB/s)(2624KiB/10142msec) 00:35:45.219 slat (nsec): min=8624, max=73297, avg=25331.42, stdev=12356.98 00:35:45.219 clat (msec): min=125, max=342, avg=247.15, stdev=34.76 00:35:45.219 lat (msec): min=125, max=342, avg=247.17, stdev=34.76 00:35:45.219 clat percentiles (msec): 00:35:45.219 | 1.00th=[ 159], 5.00th=[ 188], 10.00th=[ 205], 20.00th=[ 222], 00:35:45.219 | 30.00th=[ 232], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 257], 00:35:45.219 | 70.00th=[ 266], 80.00th=[ 271], 90.00th=[ 284], 95.00th=[ 305], 00:35:45.219 | 99.00th=[ 317], 99.50th=[ 334], 99.90th=[ 342], 99.95th=[ 342], 00:35:45.219 | 99.99th=[ 342] 00:35:45.219 bw ( KiB/s): min= 128, max= 384, per=3.96%, avg=256.00, stdev=57.34, samples=20 00:35:45.219 iops : min= 32, max= 96, avg=64.00, stdev=14.33, samples=20 
00:35:45.219 lat (msec) : 250=52.13%, 500=47.87% 00:35:45.219 cpu : usr=98.16%, sys=1.48%, ctx=17, majf=0, minf=9 00:35:45.219 IO depths : 1=4.6%, 2=10.8%, 4=25.0%, 8=51.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:35:45.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.219 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.219 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.219 filename1: (groupid=0, jobs=1): err= 0: pid=2962842: Sun Jul 14 04:52:03 2024 00:35:45.219 read: IOPS=73, BW=292KiB/s (299kB/s)(2944KiB/10082msec) 00:35:45.219 slat (nsec): min=3700, max=64845, avg=23973.57, stdev=8478.74 00:35:45.219 clat (msec): min=14, max=301, avg=218.96, stdev=60.44 00:35:45.219 lat (msec): min=14, max=301, avg=218.99, stdev=60.44 00:35:45.219 clat percentiles (msec): 00:35:45.219 | 1.00th=[ 15], 5.00th=[ 58], 10.00th=[ 163], 20.00th=[ 190], 00:35:45.219 | 30.00th=[ 209], 40.00th=[ 228], 50.00th=[ 234], 60.00th=[ 243], 00:35:45.219 | 70.00th=[ 251], 80.00th=[ 257], 90.00th=[ 271], 95.00th=[ 292], 00:35:45.219 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 300], 00:35:45.219 | 99.99th=[ 300] 00:35:45.219 bw ( KiB/s): min= 128, max= 640, per=4.46%, avg=288.00, stdev=99.72, samples=20 00:35:45.219 iops : min= 32, max= 160, avg=72.00, stdev=24.93, samples=20 00:35:45.219 lat (msec) : 20=2.17%, 50=1.90%, 100=2.45%, 250=64.13%, 500=29.35% 00:35:45.219 cpu : usr=98.23%, sys=1.41%, ctx=13, majf=0, minf=9 00:35:45.219 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:35:45.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.220 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.220 issued rwts: total=736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.220 filename1: (groupid=0, jobs=1): err= 0: pid=2962843: Sun Jul 14 04:52:03 2024 00:35:45.220 read: IOPS=64, BW=259KiB/s (265kB/s)(2624KiB/10140msec) 00:35:45.220 slat (nsec): min=6270, max=99245, avg=40921.14, stdev=20943.76 00:35:45.220 clat (msec): min=186, max=320, avg=246.92, stdev=29.24 00:35:45.220 lat (msec): min=187, max=320, avg=246.96, stdev=29.23 00:35:45.220 clat percentiles (msec): 00:35:45.220 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 207], 20.00th=[ 224], 00:35:45.220 | 30.00th=[ 232], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 255], 00:35:45.220 | 70.00th=[ 266], 80.00th=[ 271], 90.00th=[ 279], 95.00th=[ 305], 00:35:45.220 | 99.00th=[ 313], 99.50th=[ 313], 99.90th=[ 321], 99.95th=[ 321], 00:35:45.220 | 99.99th=[ 321] 00:35:45.220 bw ( KiB/s): min= 128, max= 368, per=3.96%, avg=256.00, stdev=53.45, samples=20 00:35:45.220 iops : min= 32, max= 92, avg=64.00, stdev=13.36, samples=20 00:35:45.220 lat (msec) : 250=54.27%, 500=45.73% 00:35:45.220 cpu : usr=98.33%, sys=1.22%, ctx=11, majf=0, minf=9 00:35:45.220 IO depths : 1=0.8%, 2=7.0%, 4=25.0%, 8=55.5%, 16=11.7%, 32=0.0%, >=64=0.0% 00:35:45.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.220 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.220 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.220 filename1: (groupid=0, jobs=1): err= 0: pid=2962844: Sun Jul 14 04:52:03 2024 00:35:45.220 read: IOPS=72, BW=292KiB/s 
(299kB/s)(2944KiB/10086msec) 00:35:45.220 slat (usec): min=5, max=106, avg=46.82, stdev=28.85 00:35:45.220 clat (msec): min=18, max=377, avg=218.89, stdev=64.11 00:35:45.220 lat (msec): min=18, max=377, avg=218.94, stdev=64.11 00:35:45.220 clat percentiles (msec): 00:35:45.220 | 1.00th=[ 19], 5.00th=[ 57], 10.00th=[ 142], 20.00th=[ 184], 00:35:45.220 | 30.00th=[ 197], 40.00th=[ 224], 50.00th=[ 234], 60.00th=[ 243], 00:35:45.220 | 70.00th=[ 251], 80.00th=[ 266], 90.00th=[ 275], 95.00th=[ 296], 00:35:45.220 | 99.00th=[ 351], 99.50th=[ 368], 99.90th=[ 380], 99.95th=[ 380], 00:35:45.220 | 99.99th=[ 380] 00:35:45.220 bw ( KiB/s): min= 144, max= 625, per=4.46%, avg=288.05, stdev=96.04, samples=20 00:35:45.220 iops : min= 36, max= 156, avg=72.00, stdev=23.96, samples=20 00:35:45.220 lat (msec) : 20=2.17%, 50=2.17%, 100=2.17%, 250=63.86%, 500=29.62% 00:35:45.220 cpu : usr=98.11%, sys=1.44%, ctx=12, majf=0, minf=9 00:35:45.220 IO depths : 1=1.9%, 2=8.0%, 4=25.0%, 8=54.5%, 16=10.6%, 32=0.0%, >=64=0.0% 00:35:45.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.220 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.220 issued rwts: total=736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.220 filename1: (groupid=0, jobs=1): err= 0: pid=2962845: Sun Jul 14 04:52:03 2024 00:35:45.220 read: IOPS=63, BW=255KiB/s (261kB/s)(2560KiB/10047msec) 00:35:45.220 slat (nsec): min=8671, max=97301, avg=34987.40, stdev=24750.66 00:35:45.220 clat (msec): min=188, max=388, avg=250.84, stdev=36.40 00:35:45.220 lat (msec): min=188, max=388, avg=250.87, stdev=36.39 00:35:45.220 clat percentiles (msec): 00:35:45.220 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 209], 20.00th=[ 224], 00:35:45.220 | 30.00th=[ 234], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 253], 00:35:45.220 | 70.00th=[ 266], 80.00th=[ 275], 90.00th=[ 292], 95.00th=[ 296], 00:35:45.220 | 99.00th=[ 388], 99.50th=[ 388], 99.90th=[ 388], 99.95th=[ 388], 00:35:45.220 | 99.99th=[ 388] 00:35:45.220 bw ( KiB/s): min= 128, max= 384, per=3.85%, avg=249.60, stdev=65.33, samples=20 00:35:45.220 iops : min= 32, max= 96, avg=62.40, stdev=16.33, samples=20 00:35:45.220 lat (msec) : 250=55.00%, 500=45.00% 00:35:45.220 cpu : usr=98.08%, sys=1.42%, ctx=39, majf=0, minf=9 00:35:45.220 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:45.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.220 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.220 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.220 filename2: (groupid=0, jobs=1): err= 0: pid=2962846: Sun Jul 14 04:52:03 2024 00:35:45.220 read: IOPS=65, BW=264KiB/s (270kB/s)(2680KiB/10157msec) 00:35:45.220 slat (nsec): min=11266, max=79829, avg=28825.27, stdev=8552.88 00:35:45.220 clat (msec): min=168, max=407, avg=242.02, stdev=36.56 00:35:45.220 lat (msec): min=168, max=407, avg=242.05, stdev=36.57 00:35:45.220 clat percentiles (msec): 00:35:45.220 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 192], 20.00th=[ 211], 00:35:45.220 | 30.00th=[ 226], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 247], 00:35:45.220 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 284], 95.00th=[ 296], 00:35:45.220 | 99.00th=[ 347], 99.50th=[ 347], 99.90th=[ 409], 99.95th=[ 409], 00:35:45.220 | 99.99th=[ 409] 00:35:45.220 bw ( KiB/s): min= 128, max= 
384, per=4.04%, avg=261.60, stdev=50.67, samples=20 00:35:45.220 iops : min= 32, max= 96, avg=65.40, stdev=12.67, samples=20 00:35:45.220 lat (msec) : 250=64.18%, 500=35.82% 00:35:45.220 cpu : usr=98.30%, sys=1.20%, ctx=31, majf=0, minf=9 00:35:45.220 IO depths : 1=6.0%, 2=12.2%, 4=25.1%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:45.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.220 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.220 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.220 filename2: (groupid=0, jobs=1): err= 0: pid=2962847: Sun Jul 14 04:52:03 2024 00:35:45.220 read: IOPS=64, BW=259KiB/s (265kB/s)(2624KiB/10136msec) 00:35:45.220 slat (usec): min=12, max=195, avg=70.69, stdev=21.55 00:35:45.220 clat (msec): min=137, max=370, avg=246.60, stdev=37.28 00:35:45.220 lat (msec): min=137, max=370, avg=246.67, stdev=37.28 00:35:45.220 clat percentiles (msec): 00:35:45.220 | 1.00th=[ 157], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 220], 00:35:45.220 | 30.00th=[ 230], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 257], 00:35:45.220 | 70.00th=[ 266], 80.00th=[ 271], 90.00th=[ 300], 95.00th=[ 313], 00:35:45.220 | 99.00th=[ 347], 99.50th=[ 363], 99.90th=[ 372], 99.95th=[ 372], 00:35:45.220 | 99.99th=[ 372] 00:35:45.220 bw ( KiB/s): min= 128, max= 384, per=3.96%, avg=256.00, stdev=57.10, samples=20 00:35:45.220 iops : min= 32, max= 96, avg=64.00, stdev=14.28, samples=20 00:35:45.220 lat (msec) : 250=53.35%, 500=46.65% 00:35:45.220 cpu : usr=96.29%, sys=2.08%, ctx=38, majf=0, minf=9 00:35:45.220 IO depths : 1=3.7%, 2=9.9%, 4=25.0%, 8=52.6%, 16=8.8%, 32=0.0%, >=64=0.0% 00:35:45.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.220 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.220 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.220 filename2: (groupid=0, jobs=1): err= 0: pid=2962848: Sun Jul 14 04:52:03 2024 00:35:45.220 read: IOPS=69, BW=279KiB/s (286kB/s)(2816KiB/10082msec) 00:35:45.220 slat (usec): min=4, max=303, avg=41.97, stdev=41.18 00:35:45.220 clat (msec): min=14, max=417, avg=227.34, stdev=65.13 00:35:45.220 lat (msec): min=14, max=417, avg=227.38, stdev=65.14 00:35:45.220 clat percentiles (msec): 00:35:45.220 | 1.00th=[ 15], 5.00th=[ 57], 10.00th=[ 169], 20.00th=[ 194], 00:35:45.220 | 30.00th=[ 222], 40.00th=[ 234], 50.00th=[ 241], 60.00th=[ 247], 00:35:45.220 | 70.00th=[ 255], 80.00th=[ 266], 90.00th=[ 288], 95.00th=[ 296], 00:35:45.220 | 99.00th=[ 347], 99.50th=[ 384], 99.90th=[ 418], 99.95th=[ 418], 00:35:45.220 | 99.99th=[ 418] 00:35:45.220 bw ( KiB/s): min= 128, max= 640, per=4.25%, avg=275.20, stdev=101.27, samples=20 00:35:45.220 iops : min= 32, max= 160, avg=68.80, stdev=25.32, samples=20 00:35:45.220 lat (msec) : 20=2.27%, 50=2.27%, 100=2.27%, 250=55.40%, 500=37.78% 00:35:45.220 cpu : usr=94.89%, sys=2.86%, ctx=84, majf=0, minf=9 00:35:45.220 IO depths : 1=1.0%, 2=7.1%, 4=24.7%, 8=55.7%, 16=11.5%, 32=0.0%, >=64=0.0% 00:35:45.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.220 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.220 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.220 filename2: 
(groupid=0, jobs=1): err= 0: pid=2962849: Sun Jul 14 04:52:03 2024 00:35:45.220 read: IOPS=79, BW=319KiB/s (327kB/s)(3248KiB/10172msec) 00:35:45.221 slat (usec): min=4, max=263, avg=44.33, stdev=36.39 00:35:45.221 clat (msec): min=18, max=347, avg=199.18, stdev=53.73 00:35:45.221 lat (msec): min=18, max=347, avg=199.23, stdev=53.75 00:35:45.221 clat percentiles (msec): 00:35:45.221 | 1.00th=[ 19], 5.00th=[ 57], 10.00th=[ 153], 20.00th=[ 174], 00:35:45.221 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 205], 60.00th=[ 222], 00:35:45.221 | 70.00th=[ 234], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 253], 00:35:45.221 | 99.00th=[ 309], 99.50th=[ 313], 99.90th=[ 347], 99.95th=[ 347], 00:35:45.221 | 99.99th=[ 347] 00:35:45.221 bw ( KiB/s): min= 256, max= 625, per=4.92%, avg=318.45, stdev=88.87, samples=20 00:35:45.221 iops : min= 64, max= 156, avg=79.60, stdev=22.17, samples=20 00:35:45.221 lat (msec) : 20=1.97%, 50=1.97%, 100=1.97%, 250=84.85%, 500=9.24% 00:35:45.221 cpu : usr=97.28%, sys=1.82%, ctx=52, majf=0, minf=9 00:35:45.221 IO depths : 1=2.7%, 2=7.0%, 4=19.3%, 8=61.1%, 16=9.9%, 32=0.0%, >=64=0.0% 00:35:45.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.221 complete : 0=0.0%, 4=92.5%, 8=1.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.221 issued rwts: total=812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.221 filename2: (groupid=0, jobs=1): err= 0: pid=2962850: Sun Jul 14 04:52:03 2024 00:35:45.221 read: IOPS=66, BW=265KiB/s (271kB/s)(2688KiB/10157msec) 00:35:45.221 slat (usec): min=12, max=260, avg=31.96, stdev=27.66 00:35:45.221 clat (msec): min=142, max=407, avg=241.46, stdev=43.97 00:35:45.221 lat (msec): min=142, max=407, avg=241.49, stdev=43.97 00:35:45.221 clat percentiles (msec): 00:35:45.221 | 1.00th=[ 159], 5.00th=[ 171], 10.00th=[ 180], 20.00th=[ 207], 00:35:45.221 | 30.00th=[ 224], 40.00th=[ 232], 50.00th=[ 243], 60.00th=[ 247], 00:35:45.221 | 70.00th=[ 259], 80.00th=[ 268], 90.00th=[ 292], 95.00th=[ 321], 00:35:45.221 | 99.00th=[ 368], 99.50th=[ 388], 99.90th=[ 409], 99.95th=[ 409], 00:35:45.221 | 99.99th=[ 409] 00:35:45.221 bw ( KiB/s): min= 144, max= 384, per=4.05%, avg=262.40, stdev=46.55, samples=20 00:35:45.221 iops : min= 36, max= 96, avg=65.60, stdev=11.64, samples=20 00:35:45.221 lat (msec) : 250=63.39%, 500=36.61% 00:35:45.221 cpu : usr=94.89%, sys=2.55%, ctx=77, majf=0, minf=9 00:35:45.221 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:35:45.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.221 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.221 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.221 filename2: (groupid=0, jobs=1): err= 0: pid=2962851: Sun Jul 14 04:52:03 2024 00:35:45.221 read: IOPS=64, BW=259KiB/s (265kB/s)(2624KiB/10137msec) 00:35:45.221 slat (usec): min=6, max=117, avg=75.45, stdev=15.53 00:35:45.221 clat (msec): min=143, max=406, avg=246.59, stdev=42.68 00:35:45.221 lat (msec): min=143, max=406, avg=246.67, stdev=42.68 00:35:45.221 clat percentiles (msec): 00:35:45.221 | 1.00th=[ 159], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 220], 00:35:45.221 | 30.00th=[ 226], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 249], 00:35:45.221 | 70.00th=[ 259], 80.00th=[ 268], 90.00th=[ 292], 95.00th=[ 338], 00:35:45.221 | 99.00th=[ 388], 99.50th=[ 405], 99.90th=[ 409], 
99.95th=[ 409], 00:35:45.221 | 99.99th=[ 409] 00:35:45.221 bw ( KiB/s): min= 128, max= 384, per=3.94%, avg=256.00, stdev=55.91, samples=20 00:35:45.221 iops : min= 32, max= 96, avg=64.00, stdev=13.98, samples=20 00:35:45.221 lat (msec) : 250=61.59%, 500=38.41% 00:35:45.221 cpu : usr=96.80%, sys=2.09%, ctx=43, majf=0, minf=9 00:35:45.221 IO depths : 1=4.7%, 2=10.5%, 4=24.5%, 8=52.4%, 16=7.8%, 32=0.0%, >=64=0.0% 00:35:45.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.221 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.221 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.221 filename2: (groupid=0, jobs=1): err= 0: pid=2962852: Sun Jul 14 04:52:03 2024 00:35:45.221 read: IOPS=66, BW=265KiB/s (271kB/s)(2688KiB/10157msec) 00:35:45.221 slat (nsec): min=11276, max=62634, avg=29425.85, stdev=8806.31 00:35:45.221 clat (msec): min=143, max=344, avg=241.50, stdev=36.60 00:35:45.221 lat (msec): min=143, max=344, avg=241.53, stdev=36.60 00:35:45.221 clat percentiles (msec): 00:35:45.221 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 190], 20.00th=[ 211], 00:35:45.221 | 30.00th=[ 226], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 247], 00:35:45.221 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 284], 95.00th=[ 292], 00:35:45.221 | 99.00th=[ 347], 99.50th=[ 347], 99.90th=[ 347], 99.95th=[ 347], 00:35:45.221 | 99.99th=[ 347] 00:35:45.221 bw ( KiB/s): min= 128, max= 384, per=4.05%, avg=262.40, stdev=50.44, samples=20 00:35:45.221 iops : min= 32, max= 96, avg=65.60, stdev=12.61, samples=20 00:35:45.221 lat (msec) : 250=63.99%, 500=36.01% 00:35:45.221 cpu : usr=96.94%, sys=2.04%, ctx=125, majf=0, minf=9 00:35:45.221 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:35:45.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.221 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.221 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:45.221 filename2: (groupid=0, jobs=1): err= 0: pid=2962853: Sun Jul 14 04:52:03 2024 00:35:45.221 read: IOPS=64, BW=259KiB/s (265kB/s)(2624KiB/10135msec) 00:35:45.221 slat (usec): min=4, max=229, avg=68.90, stdev=30.65 00:35:45.221 clat (msec): min=187, max=313, avg=246.52, stdev=29.33 00:35:45.221 lat (msec): min=187, max=313, avg=246.59, stdev=29.34 00:35:45.221 clat percentiles (msec): 00:35:45.221 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 207], 20.00th=[ 222], 00:35:45.221 | 30.00th=[ 232], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:35:45.221 | 70.00th=[ 266], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 300], 00:35:45.221 | 99.00th=[ 313], 99.50th=[ 313], 99.90th=[ 313], 99.95th=[ 313], 00:35:45.221 | 99.99th=[ 313] 00:35:45.221 bw ( KiB/s): min= 128, max= 384, per=3.96%, avg=256.00, stdev=58.73, samples=20 00:35:45.221 iops : min= 32, max= 96, avg=64.00, stdev=14.68, samples=20 00:35:45.221 lat (msec) : 250=53.66%, 500=46.34% 00:35:45.221 cpu : usr=95.19%, sys=2.60%, ctx=53, majf=0, minf=9 00:35:45.221 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:45.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.221 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.221 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.221 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:35:45.221 00:35:45.221 Run status group 0 (all jobs): 00:35:45.221 READ: bw=6464KiB/s (6619kB/s), 255KiB/s-330KiB/s (261kB/s-338kB/s), io=64.2MiB (67.3MB), run=10047-10172msec 00:35:45.221 04:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:45.221 04:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:45.221 04:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:45.221 04:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:45.221 04:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:45.221 04:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:45.221 04:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.221 04:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.221 04:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.221 04:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:45.221 04:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.222 04:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.222 bdev_null0 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.222 [2024-07-14 04:52:04.066384] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:45.222 04:52:04 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.222 bdev_null1 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:45.222 { 00:35:45.222 "params": { 00:35:45.222 "name": "Nvme$subsystem", 00:35:45.222 "trtype": "$TEST_TRANSPORT", 00:35:45.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:45.222 "adrfam": "ipv4", 00:35:45.222 "trsvcid": "$NVMF_PORT", 00:35:45.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:45.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:45.222 "hdgst": ${hdgst:-false}, 00:35:45.222 "ddgst": ${ddgst:-false} 00:35:45.222 }, 00:35:45.222 "method": "bdev_nvme_attach_controller" 00:35:45.222 } 00:35:45.222 EOF 00:35:45.222 )") 00:35:45.222 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
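For reference, the fio_bdev/fio_plugin wrapper invoked above boils down to preloading SPDK's external fio engine into a stock fio binary; a minimal sketch of the equivalent manual invocation, assuming the workspace paths used by this job (the trace further down shows the same LD_PRELOAD being assembled):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Preload the SPDK bdev engine, then hand fio the JSON bdev config (fd 62) and the fio job file (fd 61)
  LD_PRELOAD="$SPDK/build/fio/spdk_bdev" \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61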
00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:45.223 { 00:35:45.223 "params": { 00:35:45.223 "name": "Nvme$subsystem", 00:35:45.223 "trtype": "$TEST_TRANSPORT", 00:35:45.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:45.223 "adrfam": "ipv4", 00:35:45.223 "trsvcid": "$NVMF_PORT", 00:35:45.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:45.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:45.223 "hdgst": ${hdgst:-false}, 00:35:45.223 "ddgst": ${ddgst:-false} 00:35:45.223 }, 00:35:45.223 "method": "bdev_nvme_attach_controller" 00:35:45.223 } 00:35:45.223 EOF 00:35:45.223 )") 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:45.223 "params": { 00:35:45.223 "name": "Nvme0", 00:35:45.223 "trtype": "tcp", 00:35:45.223 "traddr": "10.0.0.2", 00:35:45.223 "adrfam": "ipv4", 00:35:45.223 "trsvcid": "4420", 00:35:45.223 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:45.223 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:45.223 "hdgst": false, 00:35:45.223 "ddgst": false 00:35:45.223 }, 00:35:45.223 "method": "bdev_nvme_attach_controller" 00:35:45.223 },{ 00:35:45.223 "params": { 00:35:45.223 "name": "Nvme1", 00:35:45.223 "trtype": "tcp", 00:35:45.223 "traddr": "10.0.0.2", 00:35:45.223 "adrfam": "ipv4", 00:35:45.223 "trsvcid": "4420", 00:35:45.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:45.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:45.223 "hdgst": false, 00:35:45.223 "ddgst": false 00:35:45.223 }, 00:35:45.223 "method": "bdev_nvme_attach_controller" 00:35:45.223 }' 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:45.223 04:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:45.223 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:45.223 ... 00:35:45.223 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:45.223 ... 
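Condensed, the target-side wiring traced out before this run is the following per-subsystem RPC sequence (a sketch; rpc_cmd in these scripts is assumed to forward to scripts/rpc.py, and 10.0.0.2:4420 is the listener address used throughout the job). The JSON printed just above is the matching initiator-side view: one bdev_nvme_attach_controller entry per subsystem, consumed by the preloaded fio engine via --spdk_json_conf.

  # 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1 (NULL_DIF=1 in dif.sh)
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # Expose it as a namespace of an NVMe-oF subsystem listening on TCP
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # repeated for bdev_null1 / cnode1 in this two-file run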
00:35:45.223 fio-3.35 00:35:45.223 Starting 4 threads 00:35:45.223 EAL: No free 2048 kB hugepages reported on node 1 00:35:50.479 00:35:50.479 filename0: (groupid=0, jobs=1): err= 0: pid=2964239: Sun Jul 14 04:52:10 2024 00:35:50.479 read: IOPS=1944, BW=15.2MiB/s (15.9MB/s)(76.0MiB/5002msec) 00:35:50.479 slat (nsec): min=4572, max=57115, avg=14397.31, stdev=6184.77 00:35:50.479 clat (usec): min=2072, max=6851, avg=4069.91, stdev=519.95 00:35:50.479 lat (usec): min=2091, max=6888, avg=4084.31, stdev=520.02 00:35:50.479 clat percentiles (usec): 00:35:50.479 | 1.00th=[ 3130], 5.00th=[ 3458], 10.00th=[ 3621], 20.00th=[ 3752], 00:35:50.479 | 30.00th=[ 3818], 40.00th=[ 3916], 50.00th=[ 3982], 60.00th=[ 4015], 00:35:50.479 | 70.00th=[ 4080], 80.00th=[ 4293], 90.00th=[ 4752], 95.00th=[ 5145], 00:35:50.479 | 99.00th=[ 5932], 99.50th=[ 6128], 99.90th=[ 6521], 99.95th=[ 6718], 00:35:50.479 | 99.99th=[ 6849] 00:35:50.479 bw ( KiB/s): min=15008, max=16064, per=24.90%, avg=15553.40, stdev=351.10, samples=10 00:35:50.479 iops : min= 1876, max= 2008, avg=1944.10, stdev=43.96, samples=10 00:35:50.479 lat (msec) : 4=55.35%, 10=44.65% 00:35:50.479 cpu : usr=93.32%, sys=5.80%, ctx=20, majf=0, minf=9 00:35:50.479 IO depths : 1=0.3%, 2=2.3%, 4=70.0%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:50.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.479 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.479 issued rwts: total=9727,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.479 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:50.479 filename0: (groupid=0, jobs=1): err= 0: pid=2964240: Sun Jul 14 04:52:10 2024 00:35:50.479 read: IOPS=1949, BW=15.2MiB/s (16.0MB/s)(76.2MiB/5002msec) 00:35:50.479 slat (nsec): min=4307, max=56783, avg=13837.79, stdev=6532.21 00:35:50.479 clat (usec): min=1292, max=7348, avg=4062.64, stdev=690.05 00:35:50.479 lat (usec): min=1310, max=7372, avg=4076.48, stdev=689.99 00:35:50.479 clat percentiles (usec): 00:35:50.479 | 1.00th=[ 2835], 5.00th=[ 3261], 10.00th=[ 3458], 20.00th=[ 3687], 00:35:50.479 | 30.00th=[ 3785], 40.00th=[ 3884], 50.00th=[ 3949], 60.00th=[ 3982], 00:35:50.479 | 70.00th=[ 4047], 80.00th=[ 4178], 90.00th=[ 5145], 95.00th=[ 5735], 00:35:50.479 | 99.00th=[ 6390], 99.50th=[ 6521], 99.90th=[ 6783], 99.95th=[ 6849], 00:35:50.479 | 99.99th=[ 7373] 00:35:50.479 bw ( KiB/s): min=15104, max=16080, per=24.97%, avg=15596.80, stdev=263.32, samples=10 00:35:50.479 iops : min= 1888, max= 2010, avg=1949.60, stdev=32.91, samples=10 00:35:50.479 lat (msec) : 2=0.04%, 4=61.05%, 10=38.91% 00:35:50.479 cpu : usr=94.52%, sys=4.90%, ctx=37, majf=0, minf=0 00:35:50.479 IO depths : 1=0.1%, 2=1.4%, 4=69.5%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:50.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.479 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.479 issued rwts: total=9753,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.479 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:50.479 filename1: (groupid=0, jobs=1): err= 0: pid=2964241: Sun Jul 14 04:52:10 2024 00:35:50.479 read: IOPS=1930, BW=15.1MiB/s (15.8MB/s)(75.4MiB/5001msec) 00:35:50.479 slat (nsec): min=4429, max=49881, avg=11896.27, stdev=4879.34 00:35:50.479 clat (usec): min=2162, max=8626, avg=4106.58, stdev=623.17 00:35:50.479 lat (usec): min=2170, max=8639, avg=4118.48, stdev=623.04 00:35:50.479 clat percentiles (usec): 00:35:50.479 | 1.00th=[ 3032], 5.00th=[ 3392], 
10.00th=[ 3589], 20.00th=[ 3752], 00:35:50.479 | 30.00th=[ 3818], 40.00th=[ 3916], 50.00th=[ 3982], 60.00th=[ 4047], 00:35:50.479 | 70.00th=[ 4113], 80.00th=[ 4359], 90.00th=[ 4883], 95.00th=[ 5538], 00:35:50.479 | 99.00th=[ 6259], 99.50th=[ 6587], 99.90th=[ 7635], 99.95th=[ 8094], 00:35:50.479 | 99.99th=[ 8586] 00:35:50.479 bw ( KiB/s): min=15104, max=16176, per=24.80%, avg=15491.56, stdev=327.20, samples=9 00:35:50.479 iops : min= 1888, max= 2022, avg=1936.44, stdev=40.90, samples=9 00:35:50.479 lat (msec) : 4=54.04%, 10=45.96% 00:35:50.479 cpu : usr=94.70%, sys=4.70%, ctx=6, majf=0, minf=9 00:35:50.479 IO depths : 1=0.1%, 2=3.7%, 4=68.4%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:50.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.479 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.479 issued rwts: total=9653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.480 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:50.480 filename1: (groupid=0, jobs=1): err= 0: pid=2964242: Sun Jul 14 04:52:10 2024 00:35:50.480 read: IOPS=1982, BW=15.5MiB/s (16.2MB/s)(77.5MiB/5002msec) 00:35:50.480 slat (nsec): min=4471, max=57177, avg=12059.34, stdev=5616.90 00:35:50.480 clat (usec): min=1620, max=8726, avg=3995.81, stdev=597.23 00:35:50.480 lat (usec): min=1633, max=8739, avg=4007.86, stdev=597.11 00:35:50.480 clat percentiles (usec): 00:35:50.480 | 1.00th=[ 2769], 5.00th=[ 3195], 10.00th=[ 3425], 20.00th=[ 3654], 00:35:50.480 | 30.00th=[ 3785], 40.00th=[ 3851], 50.00th=[ 3949], 60.00th=[ 4015], 00:35:50.480 | 70.00th=[ 4047], 80.00th=[ 4178], 90.00th=[ 4555], 95.00th=[ 5276], 00:35:50.480 | 99.00th=[ 6259], 99.50th=[ 6456], 99.90th=[ 6980], 99.95th=[ 7111], 00:35:50.480 | 99.99th=[ 8717] 00:35:50.480 bw ( KiB/s): min=15216, max=16304, per=25.40%, avg=15865.50, stdev=385.67, samples=10 00:35:50.480 iops : min= 1902, max= 2038, avg=1983.10, stdev=48.23, samples=10 00:35:50.480 lat (msec) : 2=0.01%, 4=59.72%, 10=40.27% 00:35:50.480 cpu : usr=95.18%, sys=4.28%, ctx=13, majf=0, minf=9 00:35:50.480 IO depths : 1=0.1%, 2=4.4%, 4=68.4%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:50.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.480 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.480 issued rwts: total=9917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.480 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:50.480 00:35:50.480 Run status group 0 (all jobs): 00:35:50.480 READ: bw=61.0MiB/s (64.0MB/s), 15.1MiB/s-15.5MiB/s (15.8MB/s-16.2MB/s), io=305MiB (320MB), run=5001-5002msec 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.480 00:35:50.480 real 0m24.062s 00:35:50.480 user 4m33.273s 00:35:50.480 sys 0m7.289s 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:50.480 04:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:50.480 ************************************ 00:35:50.480 END TEST fio_dif_rand_params 00:35:50.480 ************************************ 00:35:50.480 04:52:10 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:50.480 04:52:10 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:50.480 04:52:10 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:50.480 04:52:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:50.480 ************************************ 00:35:50.480 START TEST fio_dif_digest 00:35:50.480 ************************************ 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:50.480 04:52:10 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:50.480 bdev_null0 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:50.480 [2024-07-14 04:52:10.392008] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:50.480 { 00:35:50.480 "params": { 00:35:50.480 "name": "Nvme$subsystem", 00:35:50.480 "trtype": "$TEST_TRANSPORT", 00:35:50.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:50.480 "adrfam": "ipv4", 00:35:50.480 "trsvcid": "$NVMF_PORT", 00:35:50.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:50.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:50.480 "hdgst": ${hdgst:-false}, 00:35:50.480 "ddgst": ${ddgst:-false} 00:35:50.480 }, 00:35:50.480 "method": "bdev_nvme_attach_controller" 00:35:50.480 } 
00:35:50.480 EOF 00:35:50.480 )") 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:50.480 "params": { 00:35:50.480 "name": "Nvme0", 00:35:50.480 "trtype": "tcp", 00:35:50.480 "traddr": "10.0.0.2", 00:35:50.480 "adrfam": "ipv4", 00:35:50.480 "trsvcid": "4420", 00:35:50.480 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:50.480 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:50.480 "hdgst": true, 00:35:50.480 "ddgst": true 00:35:50.480 }, 00:35:50.480 "method": "bdev_nvme_attach_controller" 00:35:50.480 }' 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:50.480 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:50.481 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:50.481 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:50.481 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:50.481 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:50.481 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:50.481 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:50.481 04:52:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:50.481 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:50.481 ... 
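The fio_dif_digest run that starts below reuses the same plumbing; the differences visible in the trace are the DIF type on the backing null bdev, the digest flags on the controller attach, and the job shape (128 KiB blocks, 3 jobs, iodepth 3, 10 s runtime). A condensed sketch under the same assumptions as above:

  # DIF type 3 instead of type 1 on the backing null bdev (NULL_DIF=3)
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # ...subsystem/namespace/listener setup as before; the fio plugin then attaches with
  # NVMe/TCP header and data digests enabled, per the JSON printed above:
  #   "hdgst": true,
  #   "ddgst": true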
00:35:50.481 fio-3.35 00:35:50.481 Starting 3 threads 00:35:50.738 EAL: No free 2048 kB hugepages reported on node 1 00:36:02.929 00:36:02.929 filename0: (groupid=0, jobs=1): err= 0: pid=2965108: Sun Jul 14 04:52:21 2024 00:36:02.929 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(270MiB/10048msec) 00:36:02.929 slat (nsec): min=5253, max=39573, avg=16902.75, stdev=2445.50 00:36:02.929 clat (usec): min=8484, max=57684, avg=13933.35, stdev=3308.28 00:36:02.929 lat (usec): min=8498, max=57698, avg=13950.26, stdev=3308.50 00:36:02.929 clat percentiles (usec): 00:36:02.929 | 1.00th=[ 9372], 5.00th=[10159], 10.00th=[11469], 20.00th=[12911], 00:36:02.929 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 60.00th=[14222], 00:36:02.929 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15401], 95.00th=[15795], 00:36:02.929 | 99.00th=[16712], 99.50th=[47449], 99.90th=[56361], 99.95th=[57410], 00:36:02.929 | 99.99th=[57934] 00:36:02.929 bw ( KiB/s): min=25344, max=28928, per=34.68%, avg=27573.90, stdev=953.40, samples=20 00:36:02.929 iops : min= 198, max= 226, avg=215.40, stdev= 7.46, samples=20 00:36:02.929 lat (msec) : 10=3.85%, 20=95.64%, 50=0.05%, 100=0.46% 00:36:02.929 cpu : usr=91.45%, sys=7.94%, ctx=19, majf=0, minf=140 00:36:02.929 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:02.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.929 issued rwts: total=2157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:02.929 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:02.929 filename0: (groupid=0, jobs=1): err= 0: pid=2965109: Sun Jul 14 04:52:21 2024 00:36:02.929 read: IOPS=215, BW=27.0MiB/s (28.3MB/s)(271MiB/10047msec) 00:36:02.929 slat (nsec): min=4724, max=31267, avg=14233.64, stdev=1250.48 00:36:02.929 clat (usec): min=7970, max=56085, avg=13857.49, stdev=3885.04 00:36:02.929 lat (usec): min=7984, max=56099, avg=13871.72, stdev=3885.04 00:36:02.929 clat percentiles (usec): 00:36:02.929 | 1.00th=[ 9241], 5.00th=[10159], 10.00th=[11600], 20.00th=[12780], 00:36:02.929 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:36:02.929 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15139], 95.00th=[15533], 00:36:02.929 | 99.00th=[16712], 99.50th=[54264], 99.90th=[55837], 99.95th=[55837], 00:36:02.929 | 99.99th=[55837] 00:36:02.929 bw ( KiB/s): min=24832, max=31488, per=34.88%, avg=27737.60, stdev=1598.23, samples=20 00:36:02.929 iops : min= 194, max= 246, avg=216.70, stdev=12.49, samples=20 00:36:02.929 lat (msec) : 10=4.20%, 20=95.02%, 50=0.05%, 100=0.74% 00:36:02.929 cpu : usr=92.44%, sys=7.07%, ctx=16, majf=0, minf=156 00:36:02.929 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:02.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.929 issued rwts: total=2169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:02.929 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:02.929 filename0: (groupid=0, jobs=1): err= 0: pid=2965110: Sun Jul 14 04:52:21 2024 00:36:02.929 read: IOPS=190, BW=23.8MiB/s (25.0MB/s)(240MiB/10050msec) 00:36:02.929 slat (nsec): min=5768, max=31096, avg=14392.06, stdev=1706.61 00:36:02.929 clat (usec): min=9520, max=66349, avg=15685.11, stdev=6430.83 00:36:02.929 lat (usec): min=9534, max=66373, avg=15699.50, stdev=6430.82 00:36:02.929 clat percentiles (usec): 
00:36:02.929 | 1.00th=[10290], 5.00th=[12518], 10.00th=[13304], 20.00th=[13960], 00:36:02.929 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:36:02.929 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16450], 95.00th=[16909], 00:36:02.929 | 99.00th=[56361], 99.50th=[57410], 99.90th=[66323], 99.95th=[66323], 00:36:02.929 | 99.99th=[66323] 00:36:02.929 bw ( KiB/s): min=20224, max=27136, per=30.83%, avg=24514.30, stdev=2102.61, samples=20 00:36:02.929 iops : min= 158, max= 212, avg=191.50, stdev=16.44, samples=20 00:36:02.929 lat (msec) : 10=0.37%, 20=97.34%, 50=0.05%, 100=2.24% 00:36:02.929 cpu : usr=93.45%, sys=6.05%, ctx=11, majf=0, minf=100 00:36:02.929 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:02.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.929 issued rwts: total=1917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:02.929 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:02.929 00:36:02.929 Run status group 0 (all jobs): 00:36:02.929 READ: bw=77.6MiB/s (81.4MB/s), 23.8MiB/s-27.0MiB/s (25.0MB/s-28.3MB/s), io=780MiB (818MB), run=10047-10050msec 00:36:02.929 04:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:02.929 04:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:02.929 04:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:02.929 04:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:02.929 04:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:02.929 04:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:02.929 04:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.929 04:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:02.929 04:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.929 04:52:21 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:02.929 04:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.929 04:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:02.929 04:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.929 00:36:02.929 real 0m10.967s 00:36:02.929 user 0m28.710s 00:36:02.929 sys 0m2.381s 00:36:02.929 04:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:02.929 04:52:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:02.929 ************************************ 00:36:02.929 END TEST fio_dif_digest 00:36:02.929 ************************************ 00:36:02.929 04:52:21 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:02.929 04:52:21 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:02.929 04:52:21 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:02.929 04:52:21 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:02.929 04:52:21 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:02.929 04:52:21 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:02.929 04:52:21 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:02.929 04:52:21 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:02.929 rmmod nvme_tcp 00:36:02.929 rmmod 
nvme_fabrics 00:36:02.929 rmmod nvme_keyring 00:36:02.929 04:52:21 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:02.929 04:52:21 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:02.929 04:52:21 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:02.929 04:52:21 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2958938 ']' 00:36:02.929 04:52:21 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2958938 00:36:02.929 04:52:21 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 2958938 ']' 00:36:02.929 04:52:21 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 2958938 00:36:02.929 04:52:21 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:36:02.929 04:52:21 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:02.929 04:52:21 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2958938 00:36:02.929 04:52:21 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:02.929 04:52:21 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:02.929 04:52:21 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2958938' 00:36:02.929 killing process with pid 2958938 00:36:02.929 04:52:21 nvmf_dif -- common/autotest_common.sh@965 -- # kill 2958938 00:36:02.929 04:52:21 nvmf_dif -- common/autotest_common.sh@970 -- # wait 2958938 00:36:02.929 04:52:21 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:02.929 04:52:21 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:02.929 Waiting for block devices as requested 00:36:02.929 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:02.929 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:02.929 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:02.929 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:03.187 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:03.187 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:03.187 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:03.187 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:03.444 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:03.444 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:03.444 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:03.444 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:03.703 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:03.703 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:03.703 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:03.703 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:03.961 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:03.961 04:52:24 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:03.961 04:52:24 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:03.961 04:52:24 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:03.961 04:52:24 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:03.961 04:52:24 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:03.961 04:52:24 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:03.961 04:52:24 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.526 04:52:26 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:06.526 00:36:06.526 real 1m6.339s 00:36:06.526 user 6m29.472s 00:36:06.526 sys 0m18.855s 00:36:06.526 04:52:26 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:06.526 04:52:26 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:36:06.526 ************************************ 00:36:06.526 END TEST nvmf_dif 00:36:06.526 ************************************ 00:36:06.526 04:52:26 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:06.526 04:52:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:06.526 04:52:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:06.526 04:52:26 -- common/autotest_common.sh@10 -- # set +x 00:36:06.526 ************************************ 00:36:06.526 START TEST nvmf_abort_qd_sizes 00:36:06.526 ************************************ 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:06.526 * Looking for test storage... 00:36:06.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.526 04:52:26 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:06.526 04:52:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:08.425 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:08.425 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:08.425 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:08.425 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
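The trace that follows shows nvmf_tcp_init wiring the two ice ports found above into a loopback NVMe/TCP topology: one port is moved into a private network namespace to act as the target, the other stays in the host namespace as the initiator. A condensed sketch of that sequence, reconstructed from the trace (interface, namespace names and addresses are the ones this run uses):

  ip netns add cvl_0_0_ns_spdk                                       # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP (port 4420) in
  ping -c 1 10.0.0.2                                                 # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> host

Both pings succeeding, as they do in the trace below, is the precondition for the abort tests that follow.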
00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:08.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:08.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:36:08.425 00:36:08.425 --- 10.0.0.2 ping statistics --- 00:36:08.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.425 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:36:08.425 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:08.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:08.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:36:08.425 00:36:08.425 --- 10.0.0.1 ping statistics --- 00:36:08.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.426 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:36:08.426 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:08.426 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:08.426 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:08.426 04:52:28 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:09.359 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:09.360 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:09.360 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:09.360 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:09.360 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:09.360 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:09.360 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:09.360 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:09.360 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:09.360 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:09.360 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:09.360 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:09.360 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:09.360 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:09.617 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:09.617 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:10.549 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:10.549 04:52:30 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:10.549 04:52:30 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:10.549 04:52:30 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:10.549 04:52:30 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:10.549 04:52:30 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:10.549 04:52:30 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:10.549 04:52:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:10.549 04:52:30 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:10.549 04:52:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:10.549 04:52:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:10.549 04:52:30 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2969897 00:36:10.549 04:52:30 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:10.549 04:52:30 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2969897 00:36:10.549 04:52:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 2969897 ']' 00:36:10.549 04:52:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:10.549 04:52:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:10.549 04:52:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:10.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:10.549 04:52:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:10.549 04:52:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:10.549 [2024-07-14 04:52:30.660505] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:36:10.549 [2024-07-14 04:52:30.660600] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:10.549 EAL: No free 2048 kB hugepages reported on node 1 00:36:10.549 [2024-07-14 04:52:30.726578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:10.806 [2024-07-14 04:52:30.818321] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:10.806 [2024-07-14 04:52:30.818378] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:10.806 [2024-07-14 04:52:30.818407] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:10.806 [2024-07-14 04:52:30.818419] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:10.806 [2024-07-14 04:52:30.818428] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:10.806 [2024-07-14 04:52:30.818512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:10.806 [2024-07-14 04:52:30.818577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:10.806 [2024-07-14 04:52:30.818643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:10.806 [2024-07-14 04:52:30.818645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:10.806 04:52:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:11.062 ************************************ 00:36:11.062 START TEST spdk_target_abort 00:36:11.062 ************************************ 00:36:11.062 04:52:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:36:11.062 04:52:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:11.062 04:52:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:11.062 04:52:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.062 04:52:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:14.333 spdk_targetn1 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:14.333 [2024-07-14 04:52:33.843793] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:14.333 [2024-07-14 04:52:33.876070] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:14.333 04:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:14.333 EAL: No free 2048 kB hugepages reported on node 1 
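At this point spdk_target_abort has built its target entirely through rpc_cmd calls (the test framework's wrapper around scripts/rpc.py) and launches the abort example at queue depth 4. The same sequence, condensed into plain rpc.py invocations against the nvmf_tgt started earlier; values are copied from the trace above, so treat this as a sketch rather than the test script itself:

  ./scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target   # local NVMe -> bdev spdk_targetn1
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  # rabort then runs the abort example once per queue depth in qds=(4 24 64):
  ./build/examples/abort -q 4 -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

The qd=24 and qd=64 runs below repeat the same invocation with -q 24 and -q 64.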
00:36:16.856 Initializing NVMe Controllers 00:36:16.856 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:16.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:16.856 Initialization complete. Launching workers. 00:36:16.856 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8188, failed: 0 00:36:16.856 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1220, failed to submit 6968 00:36:16.856 success 808, unsuccess 412, failed 0 00:36:16.856 04:52:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:17.114 04:52:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:17.114 EAL: No free 2048 kB hugepages reported on node 1 00:36:20.390 Initializing NVMe Controllers 00:36:20.391 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:20.391 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:20.391 Initialization complete. Launching workers. 00:36:20.391 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8773, failed: 0 00:36:20.391 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1217, failed to submit 7556 00:36:20.391 success 343, unsuccess 874, failed 0 00:36:20.391 04:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:20.391 04:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:20.391 EAL: No free 2048 kB hugepages reported on node 1 00:36:23.665 Initializing NVMe Controllers 00:36:23.665 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:23.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:23.665 Initialization complete. Launching workers. 
00:36:23.665 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29125, failed: 0 00:36:23.665 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2716, failed to submit 26409 00:36:23.665 success 511, unsuccess 2205, failed 0 00:36:23.665 04:52:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:23.665 04:52:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.665 04:52:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:23.665 04:52:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.665 04:52:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:23.665 04:52:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.665 04:52:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:25.071 04:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:25.071 04:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2969897 00:36:25.071 04:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 2969897 ']' 00:36:25.071 04:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 2969897 00:36:25.071 04:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:36:25.071 04:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:25.071 04:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2969897 00:36:25.071 04:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:25.071 04:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:25.071 04:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2969897' 00:36:25.071 killing process with pid 2969897 00:36:25.071 04:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 2969897 00:36:25.071 04:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 2969897 00:36:25.071 00:36:25.071 real 0m14.132s 00:36:25.071 user 0m53.526s 00:36:25.071 sys 0m2.599s 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:25.071 ************************************ 00:36:25.071 END TEST spdk_target_abort 00:36:25.071 ************************************ 00:36:25.071 04:52:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:25.071 04:52:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:25.071 04:52:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:25.071 04:52:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:25.071 ************************************ 00:36:25.071 START TEST kernel_target_abort 00:36:25.071 
************************************ 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:25.071 04:52:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:26.447 Waiting for block devices as requested 00:36:26.447 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:26.447 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:26.447 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:26.705 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:26.705 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:26.705 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:26.705 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:26.964 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:26.964 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:26.964 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:26.964 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:27.224 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:27.224 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:27.224 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:27.224 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:27.483 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:27.483 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:27.483 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:27.483 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:27.483 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:27.483 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:36:27.483 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:27.483 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:27.742 No valid GPT data, bailing 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:27.742 04:52:47 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:27.742 00:36:27.742 Discovery Log Number of Records 2, Generation counter 2 00:36:27.742 =====Discovery Log Entry 0====== 00:36:27.742 trtype: tcp 00:36:27.742 adrfam: ipv4 00:36:27.742 subtype: current discovery subsystem 00:36:27.742 treq: not specified, sq flow control disable supported 00:36:27.742 portid: 1 00:36:27.742 trsvcid: 4420 00:36:27.742 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:27.742 traddr: 10.0.0.1 00:36:27.742 eflags: none 00:36:27.742 sectype: none 00:36:27.742 =====Discovery Log Entry 1====== 00:36:27.742 trtype: tcp 00:36:27.742 adrfam: ipv4 00:36:27.742 subtype: nvme subsystem 00:36:27.742 treq: not specified, sq flow control disable supported 00:36:27.742 portid: 1 00:36:27.742 trsvcid: 4420 00:36:27.742 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:27.742 traddr: 10.0.0.1 00:36:27.742 eflags: none 00:36:27.742 sectype: none 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:27.742 04:52:47 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:27.742 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:27.743 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:27.743 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:27.743 04:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:27.743 EAL: No free 2048 kB hugepages reported on node 1 00:36:31.024 Initializing NVMe Controllers 00:36:31.024 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:31.024 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:31.024 Initialization complete. Launching workers. 00:36:31.024 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27286, failed: 0 00:36:31.024 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27286, failed to submit 0 00:36:31.024 success 0, unsuccess 27286, failed 0 00:36:31.024 04:52:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:31.024 04:52:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:31.024 EAL: No free 2048 kB hugepages reported on node 1 00:36:34.297 Initializing NVMe Controllers 00:36:34.298 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:34.298 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:34.298 Initialization complete. Launching workers. 
00:36:34.298 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56692, failed: 0 00:36:34.298 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14274, failed to submit 42418 00:36:34.298 success 0, unsuccess 14274, failed 0 00:36:34.298 04:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:34.298 04:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:34.298 EAL: No free 2048 kB hugepages reported on node 1 00:36:37.574 Initializing NVMe Controllers 00:36:37.574 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:37.574 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:37.574 Initialization complete. Launching workers. 00:36:37.574 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55198, failed: 0 00:36:37.574 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13774, failed to submit 41424 00:36:37.574 success 0, unsuccess 13774, failed 0 00:36:37.574 04:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:37.574 04:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:37.574 04:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:37.574 04:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:37.574 04:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:37.574 04:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:37.574 04:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:37.574 04:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:37.574 04:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:37.574 04:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:38.139 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:38.139 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:38.399 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:38.399 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:38.399 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:38.399 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:38.399 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:38.399 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:38.399 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:38.399 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:38.399 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:38.399 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:38.399 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:38.399 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:36:38.399 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:38.399 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:39.334 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:39.334 00:36:39.334 real 0m14.315s 00:36:39.334 user 0m4.609s 00:36:39.334 sys 0m3.461s 00:36:39.334 04:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:39.334 04:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:39.334 ************************************ 00:36:39.334 END TEST kernel_target_abort 00:36:39.334 ************************************ 00:36:39.334 04:52:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:39.334 04:52:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:39.334 04:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:39.334 04:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:39.334 04:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:39.334 04:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:39.334 04:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:39.334 04:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:39.592 rmmod nvme_tcp 00:36:39.592 rmmod nvme_fabrics 00:36:39.592 rmmod nvme_keyring 00:36:39.592 04:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:39.592 04:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:39.592 04:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:39.592 04:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2969897 ']' 00:36:39.592 04:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2969897 00:36:39.592 04:52:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 2969897 ']' 00:36:39.592 04:52:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 2969897 00:36:39.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2969897) - No such process 00:36:39.592 04:52:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 2969897 is not found' 00:36:39.592 Process with pid 2969897 is not found 00:36:39.592 04:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:39.592 04:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:40.528 Waiting for block devices as requested 00:36:40.528 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:40.787 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:40.787 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:41.045 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:41.045 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:41.045 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:41.045 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:41.303 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:41.303 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:41.303 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:41.303 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:41.561 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:41.561 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:41.561 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:41.561 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:41.820 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:36:41.820 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:41.820 04:53:01 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:41.820 04:53:01 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:41.820 04:53:01 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:41.820 04:53:01 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:41.820 04:53:01 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:41.820 04:53:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:41.820 04:53:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:44.407 04:53:03 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:44.407 00:36:44.407 real 0m37.798s 00:36:44.407 user 1m0.198s 00:36:44.407 sys 0m9.449s 00:36:44.407 04:53:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:44.407 04:53:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:44.407 ************************************ 00:36:44.407 END TEST nvmf_abort_qd_sizes 00:36:44.407 ************************************ 00:36:44.407 04:53:04 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:44.407 04:53:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:44.407 04:53:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:44.407 04:53:04 -- common/autotest_common.sh@10 -- # set +x 00:36:44.407 ************************************ 00:36:44.407 START TEST keyring_file 00:36:44.407 ************************************ 00:36:44.407 04:53:04 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:44.407 * Looking for test storage... 
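The teardown traced above (unloading the nvme modules, killing the stale target PID, and setup.sh reset rebinding devices from vfio-pci back to ioatdma/nvme) can be reproduced by hand with a few commands. A minimal sketch follows, assuming an SPDK checkout at $SPDK_DIR and the target PID in $nvmfpid; both are placeholders for this note, not variables taken from the log.

# Sketch only -- the test itself drives this through nvmftestfini and setup.sh.
sync
modprobe -v -r nvme-tcp          # unload the TCP transport first
modprobe -v -r nvme-fabrics      # then the fabrics core (nvme_keyring is pulled out with it)
kill -0 "$nvmfpid" 2>/dev/null && kill "$nvmfpid"    # stop the target only if it is still alive
"$SPDK_DIR/scripts/setup.sh" reset    # rebind PCI devices from vfio-pci to their kernel drivers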
00:36:44.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:44.407 04:53:04 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:44.407 04:53:04 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:44.407 04:53:04 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:44.407 04:53:04 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:44.407 04:53:04 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:44.407 04:53:04 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.407 04:53:04 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.407 04:53:04 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.407 04:53:04 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:44.407 04:53:04 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:44.407 04:53:04 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:44.407 04:53:04 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:44.407 04:53:04 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:44.407 04:53:04 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:44.407 04:53:04 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:44.407 04:53:04 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:44.407 04:53:04 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:44.407 04:53:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:44.407 04:53:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:44.407 04:53:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:44.407 04:53:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:44.407 04:53:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:44.407 04:53:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.bl6QjPN9hS 00:36:44.407 04:53:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:44.407 04:53:04 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.bl6QjPN9hS 00:36:44.407 04:53:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.bl6QjPN9hS 00:36:44.407 04:53:04 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.bl6QjPN9hS 00:36:44.407 04:53:04 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:44.407 04:53:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:44.407 04:53:04 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:44.407 04:53:04 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:44.407 04:53:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:44.407 04:53:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:44.407 04:53:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eQdfeCSlIV 00:36:44.407 04:53:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:44.407 04:53:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:44.407 04:53:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eQdfeCSlIV 00:36:44.407 04:53:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eQdfeCSlIV 00:36:44.407 04:53:04 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.eQdfeCSlIV 00:36:44.407 04:53:04 keyring_file -- keyring/file.sh@30 -- # tgtpid=2975651 00:36:44.407 04:53:04 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:44.407 04:53:04 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2975651 00:36:44.407 04:53:04 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 2975651 ']' 00:36:44.407 04:53:04 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:44.408 04:53:04 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:44.408 04:53:04 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:44.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:44.408 04:53:04 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:44.408 04:53:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:44.408 [2024-07-14 04:53:04.241622] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
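The prep_key trace above reduces to a handful of shell steps. Below is a minimal sketch of the same key preparation, assuming the SPDK helpers have been sourced (format_interchange_psk comes from spdk/test/nvmf/common.sh and wraps the raw hex key in the NVMeTLSkey-1 interchange form); the temp-file names are random, so /tmp/tmp.bl6QjPN9hS above is just one run's result.

# Sketch of: prep_key key0 00112233445566778899aabbccddeeff 0
key=00112233445566778899aabbccddeeff     # raw hex PSK used by the test
digest=0                                 # 0 selects the plain (undigested) interchange form
key0path=$(mktemp)
format_interchange_psk "$key" "$digest" > "$key0path"   # emits NVMeTLSkey-1:... into the file
chmod 0600 "$key0path"   # keyring_file later rejects anything but owner-only permissions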
00:36:44.408 [2024-07-14 04:53:04.241714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2975651 ] 00:36:44.408 EAL: No free 2048 kB hugepages reported on node 1 00:36:44.408 [2024-07-14 04:53:04.297690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:44.408 [2024-07-14 04:53:04.382336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:44.664 04:53:04 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:44.664 [2024-07-14 04:53:04.638989] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:44.664 null0 00:36:44.664 [2024-07-14 04:53:04.671040] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:44.664 [2024-07-14 04:53:04.671518] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:44.664 [2024-07-14 04:53:04.679054] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.664 04:53:04 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:44.664 [2024-07-14 04:53:04.691070] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:44.664 request: 00:36:44.664 { 00:36:44.664 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:44.664 "secure_channel": false, 00:36:44.664 "listen_address": { 00:36:44.664 "trtype": "tcp", 00:36:44.664 "traddr": "127.0.0.1", 00:36:44.664 "trsvcid": "4420" 00:36:44.664 }, 00:36:44.664 "method": "nvmf_subsystem_add_listener", 00:36:44.664 "req_id": 1 00:36:44.664 } 00:36:44.664 Got JSON-RPC error response 00:36:44.664 response: 00:36:44.664 { 00:36:44.664 "code": -32602, 00:36:44.664 "message": "Invalid parameters" 00:36:44.664 } 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:44.664 04:53:04 
keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:44.664 04:53:04 keyring_file -- keyring/file.sh@46 -- # bperfpid=2975666 00:36:44.664 04:53:04 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2975666 /var/tmp/bperf.sock 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 2975666 ']' 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:44.664 04:53:04 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:44.664 04:53:04 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:44.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:44.665 04:53:04 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:44.665 04:53:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:44.665 [2024-07-14 04:53:04.739448] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:36:44.665 [2024-07-14 04:53:04.739517] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2975666 ] 00:36:44.665 EAL: No free 2048 kB hugepages reported on node 1 00:36:44.665 [2024-07-14 04:53:04.796551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:44.921 [2024-07-14 04:53:04.881660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:44.921 04:53:04 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:44.921 04:53:04 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:44.921 04:53:04 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bl6QjPN9hS 00:36:44.921 04:53:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bl6QjPN9hS 00:36:45.178 04:53:05 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eQdfeCSlIV 00:36:45.178 04:53:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eQdfeCSlIV 00:36:45.435 04:53:05 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:45.435 04:53:05 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:45.435 04:53:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:45.435 04:53:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:45.435 04:53:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:45.691 04:53:05 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.bl6QjPN9hS == \/\t\m\p\/\t\m\p\.\b\l\6\Q\j\P\N\9\h\S ]] 00:36:45.691 04:53:05 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:45.691 04:53:05 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:45.691 04:53:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:45.691 04:53:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:45.691 04:53:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:45.949 04:53:05 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.eQdfeCSlIV == \/\t\m\p\/\t\m\p\.\e\Q\d\f\e\C\S\l\I\V ]] 00:36:45.949 04:53:05 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:45.949 04:53:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:45.949 04:53:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:45.949 04:53:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:45.949 04:53:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:45.949 04:53:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.206 04:53:06 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:46.206 04:53:06 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:46.206 04:53:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:46.206 04:53:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:46.206 04:53:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:46.206 04:53:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.206 04:53:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:46.464 04:53:06 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:46.464 04:53:06 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:46.464 04:53:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:46.722 [2024-07-14 04:53:06.744453] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:46.722 nvme0n1 00:36:46.722 04:53:06 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:46.722 04:53:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:46.722 04:53:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:46.722 04:53:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:46.722 04:53:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.722 04:53:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:46.979 04:53:07 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:46.979 04:53:07 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:46.979 04:53:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:46.979 04:53:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:46.979 04:53:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:46.979 
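The pattern being exercised here is: register both key files over the bperf socket, attach a TLS bdev controller that references key0, and confirm via keyring_get_keys that the attach bumps key0's refcnt from 1 to 2 while key1 stays at 1. A condensed sketch of those RPC calls follows; rpc() is a local shorthand for rpc.py against /var/tmp/bperf.sock, not a helper defined by the test, and $SPDK_DIR / $key0path / $key1path are placeholders.

rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }   # hypothetical shorthand

rpc keyring_file_add_key key0 "$key0path"
rpc keyring_file_add_key key1 "$key1path"
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
rpc keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'   # expect 2 after the attach
rpc keyring_get_keys | jq '.[] | select(.name == "key1") | .refcnt'   # expect 1, key1 is unused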
04:53:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.979 04:53:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:47.236 04:53:07 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:47.236 04:53:07 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:47.236 Running I/O for 1 seconds... 00:36:48.606 00:36:48.606 Latency(us) 00:36:48.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:48.606 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:48.606 nvme0n1 : 1.03 4012.35 15.67 0.00 0.00 31452.57 4587.52 75730.49 00:36:48.606 =================================================================================================================== 00:36:48.606 Total : 4012.35 15.67 0.00 0.00 31452.57 4587.52 75730.49 00:36:48.606 0 00:36:48.606 04:53:08 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:48.606 04:53:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:48.606 04:53:08 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:48.606 04:53:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:48.606 04:53:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:48.606 04:53:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:48.606 04:53:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.606 04:53:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:48.864 04:53:08 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:48.864 04:53:08 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:48.864 04:53:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:48.864 04:53:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:48.864 04:53:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:48.864 04:53:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.864 04:53:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:49.122 04:53:09 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:49.122 04:53:09 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:49.122 04:53:09 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:49.122 04:53:09 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:49.122 04:53:09 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:49.122 04:53:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:49.122 04:53:09 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:49.122 04:53:09 
keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:49.122 04:53:09 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:49.122 04:53:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:49.380 [2024-07-14 04:53:09.461206] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:49.380 [2024-07-14 04:53:09.461687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2006310 (107): Transport endpoint is not connected 00:36:49.380 [2024-07-14 04:53:09.462675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2006310 (9): Bad file descriptor 00:36:49.380 [2024-07-14 04:53:09.463673] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:49.380 [2024-07-14 04:53:09.463696] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:49.380 [2024-07-14 04:53:09.463712] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:49.380 request: 00:36:49.380 { 00:36:49.380 "name": "nvme0", 00:36:49.380 "trtype": "tcp", 00:36:49.380 "traddr": "127.0.0.1", 00:36:49.380 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:49.380 "adrfam": "ipv4", 00:36:49.380 "trsvcid": "4420", 00:36:49.380 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:49.380 "psk": "key1", 00:36:49.380 "method": "bdev_nvme_attach_controller", 00:36:49.380 "req_id": 1 00:36:49.380 } 00:36:49.380 Got JSON-RPC error response 00:36:49.380 response: 00:36:49.380 { 00:36:49.380 "code": -5, 00:36:49.380 "message": "Input/output error" 00:36:49.380 } 00:36:49.380 04:53:09 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:49.380 04:53:09 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:49.380 04:53:09 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:49.380 04:53:09 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:49.380 04:53:09 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:49.380 04:53:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:49.380 04:53:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:49.380 04:53:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.380 04:53:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.380 04:53:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:49.638 04:53:09 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:49.638 04:53:09 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:49.638 04:53:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:49.638 04:53:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:49.638 04:53:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.638 04:53:09 keyring_file -- keyring/common.sh@10 -- # 
jq '.[] | select(.name == "key1")' 00:36:49.638 04:53:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.896 04:53:09 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:49.896 04:53:09 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:49.896 04:53:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:50.154 04:53:10 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:50.154 04:53:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:50.412 04:53:10 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:50.412 04:53:10 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:50.412 04:53:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.669 04:53:10 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:50.669 04:53:10 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.bl6QjPN9hS 00:36:50.669 04:53:10 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.bl6QjPN9hS 00:36:50.669 04:53:10 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:50.669 04:53:10 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.bl6QjPN9hS 00:36:50.669 04:53:10 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:50.670 04:53:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:50.670 04:53:10 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:50.670 04:53:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:50.670 04:53:10 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bl6QjPN9hS 00:36:50.670 04:53:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bl6QjPN9hS 00:36:50.926 [2024-07-14 04:53:11.001629] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.bl6QjPN9hS': 0100660 00:36:50.926 [2024-07-14 04:53:11.001673] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:50.926 request: 00:36:50.926 { 00:36:50.926 "name": "key0", 00:36:50.926 "path": "/tmp/tmp.bl6QjPN9hS", 00:36:50.926 "method": "keyring_file_add_key", 00:36:50.926 "req_id": 1 00:36:50.926 } 00:36:50.926 Got JSON-RPC error response 00:36:50.926 response: 00:36:50.926 { 00:36:50.926 "code": -1, 00:36:50.926 "message": "Operation not permitted" 00:36:50.926 } 00:36:50.926 04:53:11 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:50.926 04:53:11 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:50.926 04:53:11 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:50.926 04:53:11 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:50.926 04:53:11 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.bl6QjPN9hS 00:36:50.926 04:53:11 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.bl6QjPN9hS 00:36:50.926 04:53:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bl6QjPN9hS 00:36:51.183 04:53:11 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.bl6QjPN9hS 00:36:51.183 04:53:11 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:51.183 04:53:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:51.183 04:53:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:51.183 04:53:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:51.183 04:53:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:51.183 04:53:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:51.441 04:53:11 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:51.441 04:53:11 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:51.441 04:53:11 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:51.441 04:53:11 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:51.441 04:53:11 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:51.441 04:53:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:51.441 04:53:11 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:51.441 04:53:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:51.441 04:53:11 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:51.441 04:53:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:51.698 [2024-07-14 04:53:11.739657] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.bl6QjPN9hS': No such file or directory 00:36:51.698 [2024-07-14 04:53:11.739698] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:51.698 [2024-07-14 04:53:11.739731] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:51.698 [2024-07-14 04:53:11.739745] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:51.698 [2024-07-14 04:53:11.739758] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:51.698 request: 00:36:51.698 { 00:36:51.698 "name": "nvme0", 00:36:51.698 "trtype": "tcp", 00:36:51.698 "traddr": "127.0.0.1", 00:36:51.698 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:51.698 "adrfam": "ipv4", 00:36:51.698 "trsvcid": "4420", 00:36:51.698 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:51.698 "psk": "key0", 00:36:51.698 "method": "bdev_nvme_attach_controller", 
00:36:51.698 "req_id": 1 00:36:51.698 } 00:36:51.698 Got JSON-RPC error response 00:36:51.698 response: 00:36:51.698 { 00:36:51.698 "code": -19, 00:36:51.698 "message": "No such device" 00:36:51.698 } 00:36:51.698 04:53:11 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:51.698 04:53:11 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:51.698 04:53:11 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:51.698 04:53:11 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:51.698 04:53:11 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:51.698 04:53:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:51.955 04:53:12 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:51.955 04:53:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:51.955 04:53:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:51.955 04:53:12 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:51.955 04:53:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:51.955 04:53:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:51.955 04:53:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NRzyK6vOpD 00:36:51.955 04:53:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:51.955 04:53:12 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:51.955 04:53:12 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:51.955 04:53:12 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:51.955 04:53:12 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:51.955 04:53:12 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:51.955 04:53:12 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:51.955 04:53:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NRzyK6vOpD 00:36:51.955 04:53:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NRzyK6vOpD 00:36:51.955 04:53:12 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.NRzyK6vOpD 00:36:51.955 04:53:12 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NRzyK6vOpD 00:36:51.955 04:53:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NRzyK6vOpD 00:36:52.213 04:53:12 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:52.213 04:53:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:52.469 nvme0n1 00:36:52.469 04:53:12 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:52.469 04:53:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:52.469 04:53:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:52.469 04:53:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.469 04:53:12 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:52.469 04:53:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.727 04:53:12 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:52.727 04:53:12 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:52.727 04:53:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:52.986 04:53:13 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:52.986 04:53:13 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:52.986 04:53:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.986 04:53:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.986 04:53:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:53.246 04:53:13 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:53.246 04:53:13 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:53.246 04:53:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:53.246 04:53:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:53.246 04:53:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:53.246 04:53:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.246 04:53:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:53.505 04:53:13 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:53.505 04:53:13 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:53.505 04:53:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:53.761 04:53:13 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:53.761 04:53:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.761 04:53:13 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:54.018 04:53:14 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:54.018 04:53:14 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NRzyK6vOpD 00:36:54.018 04:53:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NRzyK6vOpD 00:36:54.275 04:53:14 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eQdfeCSlIV 00:36:54.275 04:53:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eQdfeCSlIV 00:36:54.531 04:53:14 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:54.531 04:53:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:54.789 nvme0n1 00:36:54.789 04:53:14 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:54.789 04:53:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:55.047 04:53:15 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:55.047 "subsystems": [ 00:36:55.047 { 00:36:55.047 "subsystem": "keyring", 00:36:55.047 "config": [ 00:36:55.047 { 00:36:55.047 "method": "keyring_file_add_key", 00:36:55.047 "params": { 00:36:55.047 "name": "key0", 00:36:55.047 "path": "/tmp/tmp.NRzyK6vOpD" 00:36:55.047 } 00:36:55.047 }, 00:36:55.047 { 00:36:55.047 "method": "keyring_file_add_key", 00:36:55.047 "params": { 00:36:55.047 "name": "key1", 00:36:55.047 "path": "/tmp/tmp.eQdfeCSlIV" 00:36:55.047 } 00:36:55.047 } 00:36:55.047 ] 00:36:55.047 }, 00:36:55.047 { 00:36:55.047 "subsystem": "iobuf", 00:36:55.047 "config": [ 00:36:55.047 { 00:36:55.047 "method": "iobuf_set_options", 00:36:55.047 "params": { 00:36:55.047 "small_pool_count": 8192, 00:36:55.047 "large_pool_count": 1024, 00:36:55.047 "small_bufsize": 8192, 00:36:55.047 "large_bufsize": 135168 00:36:55.047 } 00:36:55.047 } 00:36:55.047 ] 00:36:55.047 }, 00:36:55.047 { 00:36:55.047 "subsystem": "sock", 00:36:55.047 "config": [ 00:36:55.047 { 00:36:55.047 "method": "sock_set_default_impl", 00:36:55.047 "params": { 00:36:55.047 "impl_name": "posix" 00:36:55.047 } 00:36:55.047 }, 00:36:55.047 { 00:36:55.047 "method": "sock_impl_set_options", 00:36:55.047 "params": { 00:36:55.047 "impl_name": "ssl", 00:36:55.047 "recv_buf_size": 4096, 00:36:55.047 "send_buf_size": 4096, 00:36:55.047 "enable_recv_pipe": true, 00:36:55.047 "enable_quickack": false, 00:36:55.047 "enable_placement_id": 0, 00:36:55.047 "enable_zerocopy_send_server": true, 00:36:55.047 "enable_zerocopy_send_client": false, 00:36:55.047 "zerocopy_threshold": 0, 00:36:55.047 "tls_version": 0, 00:36:55.047 "enable_ktls": false 00:36:55.047 } 00:36:55.047 }, 00:36:55.047 { 00:36:55.047 "method": "sock_impl_set_options", 00:36:55.047 "params": { 00:36:55.047 "impl_name": "posix", 00:36:55.047 "recv_buf_size": 2097152, 00:36:55.047 "send_buf_size": 2097152, 00:36:55.047 "enable_recv_pipe": true, 00:36:55.047 "enable_quickack": false, 00:36:55.047 "enable_placement_id": 0, 00:36:55.048 "enable_zerocopy_send_server": true, 00:36:55.048 "enable_zerocopy_send_client": false, 00:36:55.048 "zerocopy_threshold": 0, 00:36:55.048 "tls_version": 0, 00:36:55.048 "enable_ktls": false 00:36:55.048 } 00:36:55.048 } 00:36:55.048 ] 00:36:55.048 }, 00:36:55.048 { 00:36:55.048 "subsystem": "vmd", 00:36:55.048 "config": [] 00:36:55.048 }, 00:36:55.048 { 00:36:55.048 "subsystem": "accel", 00:36:55.048 "config": [ 00:36:55.048 { 00:36:55.048 "method": "accel_set_options", 00:36:55.048 "params": { 00:36:55.048 "small_cache_size": 128, 00:36:55.048 "large_cache_size": 16, 00:36:55.048 "task_count": 2048, 00:36:55.048 "sequence_count": 2048, 00:36:55.048 "buf_count": 2048 00:36:55.048 } 00:36:55.048 } 00:36:55.048 ] 00:36:55.048 }, 00:36:55.048 { 00:36:55.048 "subsystem": "bdev", 00:36:55.048 "config": [ 00:36:55.048 { 00:36:55.048 "method": "bdev_set_options", 00:36:55.048 "params": { 00:36:55.048 "bdev_io_pool_size": 65535, 00:36:55.048 "bdev_io_cache_size": 256, 00:36:55.048 "bdev_auto_examine": true, 00:36:55.048 "iobuf_small_cache_size": 128, 
00:36:55.048 "iobuf_large_cache_size": 16 00:36:55.048 } 00:36:55.048 }, 00:36:55.048 { 00:36:55.048 "method": "bdev_raid_set_options", 00:36:55.048 "params": { 00:36:55.048 "process_window_size_kb": 1024 00:36:55.048 } 00:36:55.048 }, 00:36:55.048 { 00:36:55.048 "method": "bdev_iscsi_set_options", 00:36:55.048 "params": { 00:36:55.048 "timeout_sec": 30 00:36:55.048 } 00:36:55.048 }, 00:36:55.048 { 00:36:55.048 "method": "bdev_nvme_set_options", 00:36:55.048 "params": { 00:36:55.048 "action_on_timeout": "none", 00:36:55.048 "timeout_us": 0, 00:36:55.048 "timeout_admin_us": 0, 00:36:55.048 "keep_alive_timeout_ms": 10000, 00:36:55.048 "arbitration_burst": 0, 00:36:55.048 "low_priority_weight": 0, 00:36:55.048 "medium_priority_weight": 0, 00:36:55.048 "high_priority_weight": 0, 00:36:55.048 "nvme_adminq_poll_period_us": 10000, 00:36:55.048 "nvme_ioq_poll_period_us": 0, 00:36:55.048 "io_queue_requests": 512, 00:36:55.048 "delay_cmd_submit": true, 00:36:55.048 "transport_retry_count": 4, 00:36:55.048 "bdev_retry_count": 3, 00:36:55.048 "transport_ack_timeout": 0, 00:36:55.048 "ctrlr_loss_timeout_sec": 0, 00:36:55.048 "reconnect_delay_sec": 0, 00:36:55.048 "fast_io_fail_timeout_sec": 0, 00:36:55.048 "disable_auto_failback": false, 00:36:55.048 "generate_uuids": false, 00:36:55.048 "transport_tos": 0, 00:36:55.048 "nvme_error_stat": false, 00:36:55.048 "rdma_srq_size": 0, 00:36:55.048 "io_path_stat": false, 00:36:55.048 "allow_accel_sequence": false, 00:36:55.048 "rdma_max_cq_size": 0, 00:36:55.048 "rdma_cm_event_timeout_ms": 0, 00:36:55.048 "dhchap_digests": [ 00:36:55.048 "sha256", 00:36:55.048 "sha384", 00:36:55.048 "sha512" 00:36:55.048 ], 00:36:55.048 "dhchap_dhgroups": [ 00:36:55.048 "null", 00:36:55.048 "ffdhe2048", 00:36:55.048 "ffdhe3072", 00:36:55.048 "ffdhe4096", 00:36:55.048 "ffdhe6144", 00:36:55.048 "ffdhe8192" 00:36:55.048 ] 00:36:55.048 } 00:36:55.048 }, 00:36:55.048 { 00:36:55.048 "method": "bdev_nvme_attach_controller", 00:36:55.048 "params": { 00:36:55.048 "name": "nvme0", 00:36:55.048 "trtype": "TCP", 00:36:55.048 "adrfam": "IPv4", 00:36:55.048 "traddr": "127.0.0.1", 00:36:55.048 "trsvcid": "4420", 00:36:55.048 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:55.048 "prchk_reftag": false, 00:36:55.048 "prchk_guard": false, 00:36:55.048 "ctrlr_loss_timeout_sec": 0, 00:36:55.048 "reconnect_delay_sec": 0, 00:36:55.048 "fast_io_fail_timeout_sec": 0, 00:36:55.048 "psk": "key0", 00:36:55.048 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:55.048 "hdgst": false, 00:36:55.048 "ddgst": false 00:36:55.048 } 00:36:55.048 }, 00:36:55.048 { 00:36:55.048 "method": "bdev_nvme_set_hotplug", 00:36:55.048 "params": { 00:36:55.048 "period_us": 100000, 00:36:55.048 "enable": false 00:36:55.048 } 00:36:55.048 }, 00:36:55.048 { 00:36:55.048 "method": "bdev_wait_for_examine" 00:36:55.048 } 00:36:55.048 ] 00:36:55.048 }, 00:36:55.048 { 00:36:55.048 "subsystem": "nbd", 00:36:55.048 "config": [] 00:36:55.048 } 00:36:55.048 ] 00:36:55.048 }' 00:36:55.048 04:53:15 keyring_file -- keyring/file.sh@114 -- # killprocess 2975666 00:36:55.048 04:53:15 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 2975666 ']' 00:36:55.048 04:53:15 keyring_file -- common/autotest_common.sh@950 -- # kill -0 2975666 00:36:55.048 04:53:15 keyring_file -- common/autotest_common.sh@951 -- # uname 00:36:55.048 04:53:15 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:55.048 04:53:15 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2975666 00:36:55.048 04:53:15 keyring_file 
-- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:55.048 04:53:15 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:55.048 04:53:15 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2975666' 00:36:55.048 killing process with pid 2975666 00:36:55.048 04:53:15 keyring_file -- common/autotest_common.sh@965 -- # kill 2975666 00:36:55.048 Received shutdown signal, test time was about 1.000000 seconds 00:36:55.048 00:36:55.048 Latency(us) 00:36:55.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:55.048 =================================================================================================================== 00:36:55.048 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:55.048 04:53:15 keyring_file -- common/autotest_common.sh@970 -- # wait 2975666 00:36:55.307 04:53:15 keyring_file -- keyring/file.sh@117 -- # bperfpid=2977113 00:36:55.307 04:53:15 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2977113 /var/tmp/bperf.sock 00:36:55.307 04:53:15 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 2977113 ']' 00:36:55.307 04:53:15 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:55.307 04:53:15 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:55.307 04:53:15 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:55.307 04:53:15 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:55.307 04:53:15 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:55.307 "subsystems": [ 00:36:55.307 { 00:36:55.307 "subsystem": "keyring", 00:36:55.307 "config": [ 00:36:55.307 { 00:36:55.307 "method": "keyring_file_add_key", 00:36:55.307 "params": { 00:36:55.307 "name": "key0", 00:36:55.307 "path": "/tmp/tmp.NRzyK6vOpD" 00:36:55.307 } 00:36:55.307 }, 00:36:55.307 { 00:36:55.307 "method": "keyring_file_add_key", 00:36:55.307 "params": { 00:36:55.307 "name": "key1", 00:36:55.307 "path": "/tmp/tmp.eQdfeCSlIV" 00:36:55.307 } 00:36:55.307 } 00:36:55.307 ] 00:36:55.307 }, 00:36:55.307 { 00:36:55.307 "subsystem": "iobuf", 00:36:55.307 "config": [ 00:36:55.307 { 00:36:55.307 "method": "iobuf_set_options", 00:36:55.307 "params": { 00:36:55.307 "small_pool_count": 8192, 00:36:55.307 "large_pool_count": 1024, 00:36:55.307 "small_bufsize": 8192, 00:36:55.307 "large_bufsize": 135168 00:36:55.307 } 00:36:55.307 } 00:36:55.307 ] 00:36:55.307 }, 00:36:55.307 { 00:36:55.307 "subsystem": "sock", 00:36:55.307 "config": [ 00:36:55.307 { 00:36:55.307 "method": "sock_set_default_impl", 00:36:55.307 "params": { 00:36:55.307 "impl_name": "posix" 00:36:55.307 } 00:36:55.307 }, 00:36:55.307 { 00:36:55.307 "method": "sock_impl_set_options", 00:36:55.307 "params": { 00:36:55.307 "impl_name": "ssl", 00:36:55.307 "recv_buf_size": 4096, 00:36:55.307 "send_buf_size": 4096, 00:36:55.307 "enable_recv_pipe": true, 00:36:55.307 "enable_quickack": false, 00:36:55.307 "enable_placement_id": 0, 00:36:55.307 "enable_zerocopy_send_server": true, 00:36:55.307 "enable_zerocopy_send_client": false, 00:36:55.307 "zerocopy_threshold": 0, 00:36:55.307 "tls_version": 0, 00:36:55.307 "enable_ktls": false 00:36:55.307 } 00:36:55.307 }, 00:36:55.307 { 00:36:55.307 "method": "sock_impl_set_options", 00:36:55.307 "params": { 
00:36:55.307 "impl_name": "posix", 00:36:55.307 "recv_buf_size": 2097152, 00:36:55.307 "send_buf_size": 2097152, 00:36:55.307 "enable_recv_pipe": true, 00:36:55.307 "enable_quickack": false, 00:36:55.307 "enable_placement_id": 0, 00:36:55.307 "enable_zerocopy_send_server": true, 00:36:55.307 "enable_zerocopy_send_client": false, 00:36:55.307 "zerocopy_threshold": 0, 00:36:55.307 "tls_version": 0, 00:36:55.307 "enable_ktls": false 00:36:55.307 } 00:36:55.307 } 00:36:55.307 ] 00:36:55.307 }, 00:36:55.307 { 00:36:55.307 "subsystem": "vmd", 00:36:55.307 "config": [] 00:36:55.307 }, 00:36:55.307 { 00:36:55.307 "subsystem": "accel", 00:36:55.307 "config": [ 00:36:55.307 { 00:36:55.307 "method": "accel_set_options", 00:36:55.307 "params": { 00:36:55.307 "small_cache_size": 128, 00:36:55.307 "large_cache_size": 16, 00:36:55.307 "task_count": 2048, 00:36:55.307 "sequence_count": 2048, 00:36:55.307 "buf_count": 2048 00:36:55.307 } 00:36:55.307 } 00:36:55.307 ] 00:36:55.307 }, 00:36:55.307 { 00:36:55.307 "subsystem": "bdev", 00:36:55.307 "config": [ 00:36:55.307 { 00:36:55.307 "method": "bdev_set_options", 00:36:55.307 "params": { 00:36:55.307 "bdev_io_pool_size": 65535, 00:36:55.307 "bdev_io_cache_size": 256, 00:36:55.307 "bdev_auto_examine": true, 00:36:55.307 "iobuf_small_cache_size": 128, 00:36:55.307 "iobuf_large_cache_size": 16 00:36:55.307 } 00:36:55.307 }, 00:36:55.307 { 00:36:55.307 "method": "bdev_raid_set_options", 00:36:55.307 "params": { 00:36:55.307 "process_window_size_kb": 1024 00:36:55.307 } 00:36:55.307 }, 00:36:55.307 { 00:36:55.307 "method": "bdev_iscsi_set_options", 00:36:55.307 "params": { 00:36:55.307 "timeout_sec": 30 00:36:55.307 } 00:36:55.307 }, 00:36:55.307 { 00:36:55.307 "method": "bdev_nvme_set_options", 00:36:55.307 "params": { 00:36:55.307 "action_on_timeout": "none", 00:36:55.307 "timeout_us": 0, 00:36:55.307 "timeout_admin_us": 0, 00:36:55.307 "keep_alive_timeout_ms": 10000, 00:36:55.307 "arbitration_burst": 0, 00:36:55.307 "low_priority_weight": 0, 00:36:55.307 "medium_priority_weight": 0, 00:36:55.307 "high_priority_weight": 0, 00:36:55.307 "nvme_adminq_poll_period_us": 10000, 00:36:55.307 "nvme_ioq_poll_period_us": 0, 00:36:55.307 "io_queue_requests": 512, 00:36:55.307 "delay_cmd_submit": true, 00:36:55.307 "transport_retry_count": 4, 00:36:55.307 "bdev_retry_count": 3, 00:36:55.307 "transport_ack_timeout": 0, 00:36:55.307 "ctrlr_loss_timeout_sec": 0, 00:36:55.307 "reconnect_delay_sec": 0, 00:36:55.307 "fast_io_fail_timeout_sec": 0, 00:36:55.307 "disable_auto_failback": false, 00:36:55.307 "generate_uuids": false, 00:36:55.307 "transport_tos": 0, 00:36:55.307 "nvme_error_stat": false, 00:36:55.307 "rdma_srq_size": 0, 00:36:55.307 "io_path_stat": false, 00:36:55.307 "allow_accel_sequence": false, 00:36:55.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:36:55.307 "rdma_max_cq_size": 0, 00:36:55.307 "rdma_cm_event_timeout_ms": 0, 00:36:55.307 "dhchap_digests": [ 00:36:55.307 "sha256", 00:36:55.307 "sha384", 00:36:55.307 "sha512" 00:36:55.307 ], 00:36:55.307 "dhchap_dhgroups": [ 00:36:55.307 "null", 00:36:55.307 "ffdhe2048", 00:36:55.307 "ffdhe3072", 00:36:55.307 "ffdhe4096", 00:36:55.307 "ffdhe6144", 00:36:55.307 "ffdhe8192" 00:36:55.307 ] 00:36:55.307 } 00:36:55.307 }, 00:36:55.307 { 00:36:55.307 "method": "bdev_nvme_attach_controller", 00:36:55.307 "params": { 00:36:55.307 "name": "nvme0", 00:36:55.307 "trtype": "TCP", 00:36:55.307 "adrfam": "IPv4", 00:36:55.307 "traddr": "127.0.0.1", 00:36:55.307 "trsvcid": "4420", 00:36:55.307 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:55.307 "prchk_reftag": false, 00:36:55.307 "prchk_guard": false, 00:36:55.307 "ctrlr_loss_timeout_sec": 0, 00:36:55.307 "reconnect_delay_sec": 0, 00:36:55.307 "fast_io_fail_timeout_sec": 0, 00:36:55.307 "psk": "key0", 00:36:55.307 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:55.307 "hdgst": false, 00:36:55.307 "ddgst": false 00:36:55.307 } 00:36:55.307 }, 00:36:55.307 { 00:36:55.307 "method": "bdev_nvme_set_hotplug", 00:36:55.307 "params": { 00:36:55.307 "period_us": 100000, 00:36:55.307 "enable": false 00:36:55.307 } 00:36:55.307 }, 00:36:55.307 { 00:36:55.307 "method": "bdev_wait_for_examine" 00:36:55.307 } 00:36:55.307 ] 00:36:55.307 }, 00:36:55.307 { 00:36:55.307 "subsystem": "nbd", 00:36:55.307 "config": [] 00:36:55.307 } 00:36:55.307 ] 00:36:55.307 }' 00:36:55.307 04:53:15 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:55.307 04:53:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:55.566 [2024-07-14 04:53:15.500762] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:36:55.566 [2024-07-14 04:53:15.500843] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2977113 ] 00:36:55.566 EAL: No free 2048 kB hugepages reported on node 1 00:36:55.566 [2024-07-14 04:53:15.562057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:55.566 [2024-07-14 04:53:15.654034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:55.823 [2024-07-14 04:53:15.834383] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:56.416 04:53:16 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:56.416 04:53:16 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:56.416 04:53:16 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:56.416 04:53:16 keyring_file -- keyring/file.sh@120 -- # jq length 00:36:56.416 04:53:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:56.674 04:53:16 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:36:56.674 04:53:16 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:36:56.674 04:53:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:56.674 04:53:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:56.674 04:53:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:56.674 04:53:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:56.674 04:53:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:56.932 04:53:16 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:56.932 04:53:16 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:36:56.932 04:53:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:56.932 04:53:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:56.932 04:53:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:56.932 04:53:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:56.932 04:53:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:57.190 04:53:17 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:36:57.190 04:53:17 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:36:57.190 04:53:17 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:36:57.190 04:53:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:57.447 04:53:17 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:36:57.447 04:53:17 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:57.447 04:53:17 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.NRzyK6vOpD /tmp/tmp.eQdfeCSlIV 00:36:57.447 04:53:17 keyring_file -- keyring/file.sh@20 -- # killprocess 2977113 00:36:57.447 04:53:17 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 2977113 ']' 00:36:57.448 04:53:17 keyring_file -- common/autotest_common.sh@950 -- # kill -0 2977113 00:36:57.448 04:53:17 keyring_file -- common/autotest_common.sh@951 -- # 
uname 00:36:57.448 04:53:17 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:57.448 04:53:17 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2977113 00:36:57.448 04:53:17 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:57.448 04:53:17 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:57.448 04:53:17 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2977113' 00:36:57.448 killing process with pid 2977113 00:36:57.448 04:53:17 keyring_file -- common/autotest_common.sh@965 -- # kill 2977113 00:36:57.448 Received shutdown signal, test time was about 1.000000 seconds 00:36:57.448 00:36:57.448 Latency(us) 00:36:57.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:57.448 =================================================================================================================== 00:36:57.448 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:57.448 04:53:17 keyring_file -- common/autotest_common.sh@970 -- # wait 2977113 00:36:57.706 04:53:17 keyring_file -- keyring/file.sh@21 -- # killprocess 2975651 00:36:57.706 04:53:17 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 2975651 ']' 00:36:57.706 04:53:17 keyring_file -- common/autotest_common.sh@950 -- # kill -0 2975651 00:36:57.706 04:53:17 keyring_file -- common/autotest_common.sh@951 -- # uname 00:36:57.706 04:53:17 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:57.706 04:53:17 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2975651 00:36:57.706 04:53:17 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:57.706 04:53:17 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:57.706 04:53:17 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2975651' 00:36:57.706 killing process with pid 2975651 00:36:57.706 04:53:17 keyring_file -- common/autotest_common.sh@965 -- # kill 2975651 00:36:57.706 [2024-07-14 04:53:17.691611] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:57.706 04:53:17 keyring_file -- common/autotest_common.sh@970 -- # wait 2975651 00:36:57.965 00:36:57.965 real 0m14.026s 00:36:57.965 user 0m34.579s 00:36:57.965 sys 0m3.278s 00:36:57.965 04:53:18 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:57.965 04:53:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:57.965 ************************************ 00:36:57.965 END TEST keyring_file 00:36:57.965 ************************************ 00:36:57.965 04:53:18 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:36:57.965 04:53:18 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:57.965 04:53:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:57.965 04:53:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:57.965 04:53:18 -- common/autotest_common.sh@10 -- # set +x 00:36:57.965 ************************************ 00:36:57.965 START TEST keyring_linux 00:36:57.965 ************************************ 00:36:57.965 04:53:18 keyring_linux -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:57.965 * Looking for test storage... 
00:36:58.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:58.224 04:53:18 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:58.224 04:53:18 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:58.224 04:53:18 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:58.224 04:53:18 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:58.224 04:53:18 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:58.224 04:53:18 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.224 04:53:18 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.224 04:53:18 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.224 04:53:18 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:58.224 04:53:18 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:58.224 04:53:18 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:58.224 04:53:18 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:58.224 04:53:18 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:58.224 04:53:18 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:58.224 04:53:18 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:58.224 04:53:18 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:58.224 04:53:18 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:58.224 04:53:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:58.224 04:53:18 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:58.224 04:53:18 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:58.224 04:53:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:58.224 04:53:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:58.224 04:53:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:58.224 04:53:18 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:58.224 04:53:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:58.224 /tmp/:spdk-test:key0 00:36:58.224 04:53:18 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:58.224 04:53:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:58.224 04:53:18 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:58.224 04:53:18 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:58.224 04:53:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:58.224 04:53:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:58.224 04:53:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:58.224 04:53:18 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:58.224 04:53:18 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:58.224 04:53:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:58.224 /tmp/:spdk-test:key1 00:36:58.224 04:53:18 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2977479 00:36:58.224 04:53:18 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:58.224 04:53:18 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2977479 00:36:58.224 04:53:18 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 2977479 ']' 00:36:58.224 04:53:18 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:58.224 04:53:18 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:58.224 04:53:18 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:58.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:58.224 04:53:18 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:58.224 04:53:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:58.224 [2024-07-14 04:53:18.296516] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
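A minimal sketch of the prep_key step traced above for key0. The interchange string is copied verbatim from this run; producing that string from the raw hex key is what the `python -` call inside format_interchange_psk does and is not reproduced here.

key0_path=/tmp/:spdk-test:key0
# Sketch: write the already-formatted PSK and restrict its permissions.
echo "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$key0_path"
chmod 0600 "$key0_path"
echo "$key0_path"    # prep_key prints the path of the key file it created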
00:36:58.224 [2024-07-14 04:53:18.296599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2977479 ] 00:36:58.224 EAL: No free 2048 kB hugepages reported on node 1 00:36:58.224 [2024-07-14 04:53:18.354211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:58.483 [2024-07-14 04:53:18.439443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:58.483 04:53:18 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:58.483 04:53:18 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:36:58.483 04:53:18 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:58.483 04:53:18 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:58.741 04:53:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:58.741 [2024-07-14 04:53:18.679679] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:58.741 null0 00:36:58.741 [2024-07-14 04:53:18.711721] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:58.741 [2024-07-14 04:53:18.712194] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:58.741 04:53:18 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:58.741 04:53:18 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:58.741 739860372 00:36:58.741 04:53:18 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:58.741 96902543 00:36:58.741 04:53:18 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2977559 00:36:58.741 04:53:18 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:58.741 04:53:18 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2977559 /var/tmp/bperf.sock 00:36:58.741 04:53:18 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 2977559 ']' 00:36:58.741 04:53:18 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:58.741 04:53:18 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:58.741 04:53:18 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:58.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:58.741 04:53:18 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:58.741 04:53:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:58.741 [2024-07-14 04:53:18.779947] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
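Before bdevperf comes up, the two PSKs are loaded into the kernel session keyring; the serial numbers echoed above (739860372 and 96902543) are what keyctl returns. A minimal sketch, assuming the strings are read back from the prep_key files (the script itself passes them as literals):

sn0=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)    # -> 739860372 in this run
sn1=$(keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s)    # -> 96902543 in this run
keyctl print "$sn0"    # should print the NVMeTLSkey-1:00:...: interchange string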
00:36:58.741 [2024-07-14 04:53:18.780022] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2977559 ] 00:36:58.741 EAL: No free 2048 kB hugepages reported on node 1 00:36:58.741 [2024-07-14 04:53:18.845687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:58.999 [2024-07-14 04:53:18.936827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:58.999 04:53:18 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:58.999 04:53:18 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:36:58.999 04:53:18 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:58.999 04:53:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:59.256 04:53:19 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:59.256 04:53:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:59.515 04:53:19 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:59.515 04:53:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:59.773 [2024-07-14 04:53:19.786786] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:59.773 nvme0n1 00:36:59.773 04:53:19 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:59.773 04:53:19 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:59.773 04:53:19 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:59.773 04:53:19 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:59.773 04:53:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:59.773 04:53:19 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:00.031 04:53:20 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:00.031 04:53:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:00.031 04:53:20 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:00.031 04:53:20 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:00.031 04:53:20 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:00.031 04:53:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:00.031 04:53:20 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:00.289 04:53:20 keyring_linux -- keyring/linux.sh@25 -- # sn=739860372 00:37:00.289 04:53:20 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:00.289 04:53:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
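A minimal sketch of the check_keys pass above: count the keys the bperf application reports, extract the serial it associates with :spdk-test:key0, and confirm it matches what keyctl search resolves in the session keyring (739860372 in this run).

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/bperf.sock keyring_get_keys | jq length    # expect 1 at this point
sn=$("$rpc" -s /var/tmp/bperf.sock keyring_get_keys \
    | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
[[ "$sn" == "$(keyctl search @s user :spdk-test:key0)" ]] && echo ':spdk-test:key0 resolves to the expected kernel key'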
00:37:00.289 04:53:20 keyring_linux -- keyring/linux.sh@26 -- # [[ 739860372 == \7\3\9\8\6\0\3\7\2 ]] 00:37:00.289 04:53:20 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 739860372 00:37:00.289 04:53:20 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:00.289 04:53:20 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:00.289 Running I/O for 1 seconds... 00:37:01.663 00:37:01.663 Latency(us) 00:37:01.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:01.663 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:01.663 nvme0n1 : 1.03 3238.55 12.65 0.00 0.00 39009.89 7767.23 49127.73 00:37:01.663 =================================================================================================================== 00:37:01.663 Total : 3238.55 12.65 0.00 0.00 39009.89 7767.23 49127.73 00:37:01.663 0 00:37:01.663 04:53:21 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:01.663 04:53:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:01.663 04:53:21 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:01.663 04:53:21 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:01.663 04:53:21 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:01.663 04:53:21 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:01.663 04:53:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:01.663 04:53:21 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:01.920 04:53:22 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:01.920 04:53:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:01.920 04:53:22 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:01.920 04:53:22 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:01.920 04:53:22 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:01.920 04:53:22 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:01.920 04:53:22 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:01.920 04:53:22 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:01.920 04:53:22 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:01.920 04:53:22 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:01.920 04:53:22 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:01.920 04:53:22 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:02.178 [2024-07-14 04:53:22.264731] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:02.178 [2024-07-14 04:53:22.265232] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1746700 (107): Transport endpoint is not connected 00:37:02.178 [2024-07-14 04:53:22.266222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1746700 (9): Bad file descriptor 00:37:02.178 [2024-07-14 04:53:22.267222] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:02.178 [2024-07-14 04:53:22.267245] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:02.178 [2024-07-14 04:53:22.267260] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:02.178 request: 00:37:02.178 { 00:37:02.178 "name": "nvme0", 00:37:02.178 "trtype": "tcp", 00:37:02.178 "traddr": "127.0.0.1", 00:37:02.178 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:02.178 "adrfam": "ipv4", 00:37:02.178 "trsvcid": "4420", 00:37:02.178 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:02.178 "psk": ":spdk-test:key1", 00:37:02.178 "method": "bdev_nvme_attach_controller", 00:37:02.178 "req_id": 1 00:37:02.178 } 00:37:02.178 Got JSON-RPC error response 00:37:02.178 response: 00:37:02.178 { 00:37:02.178 "code": -5, 00:37:02.178 "message": "Input/output error" 00:37:02.178 } 00:37:02.178 04:53:22 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:02.178 04:53:22 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:02.178 04:53:22 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:02.178 04:53:22 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:02.178 04:53:22 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:02.178 04:53:22 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:02.178 04:53:22 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:02.178 04:53:22 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:02.178 04:53:22 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:02.178 04:53:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:02.178 04:53:22 keyring_linux -- keyring/linux.sh@33 -- # sn=739860372 00:37:02.178 04:53:22 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 739860372 00:37:02.178 1 links removed 00:37:02.178 04:53:22 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:02.178 04:53:22 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:02.178 04:53:22 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:02.178 04:53:22 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:02.178 04:53:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:02.178 04:53:22 keyring_linux -- keyring/linux.sh@33 -- # sn=96902543 00:37:02.178 04:53:22 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 96902543 00:37:02.178 1 links removed 00:37:02.178 04:53:22 keyring_linux -- keyring/linux.sh@41 
-- # killprocess 2977559 00:37:02.178 04:53:22 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 2977559 ']' 00:37:02.178 04:53:22 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 2977559 00:37:02.178 04:53:22 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:02.178 04:53:22 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:02.178 04:53:22 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2977559 00:37:02.178 04:53:22 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:02.178 04:53:22 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:02.178 04:53:22 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2977559' 00:37:02.178 killing process with pid 2977559 00:37:02.178 04:53:22 keyring_linux -- common/autotest_common.sh@965 -- # kill 2977559 00:37:02.178 Received shutdown signal, test time was about 1.000000 seconds 00:37:02.178 00:37:02.178 Latency(us) 00:37:02.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:02.178 =================================================================================================================== 00:37:02.178 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:02.178 04:53:22 keyring_linux -- common/autotest_common.sh@970 -- # wait 2977559 00:37:02.437 04:53:22 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2977479 00:37:02.437 04:53:22 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 2977479 ']' 00:37:02.437 04:53:22 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 2977479 00:37:02.437 04:53:22 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:02.437 04:53:22 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:02.437 04:53:22 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2977479 00:37:02.437 04:53:22 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:02.437 04:53:22 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:02.437 04:53:22 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2977479' 00:37:02.437 killing process with pid 2977479 00:37:02.437 04:53:22 keyring_linux -- common/autotest_common.sh@965 -- # kill 2977479 00:37:02.437 04:53:22 keyring_linux -- common/autotest_common.sh@970 -- # wait 2977479 00:37:03.004 00:37:03.005 real 0m4.808s 00:37:03.005 user 0m9.029s 00:37:03.005 sys 0m1.436s 00:37:03.005 04:53:22 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:03.005 04:53:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:03.005 ************************************ 00:37:03.005 END TEST keyring_linux 00:37:03.005 ************************************ 00:37:03.005 04:53:22 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:03.005 04:53:22 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:03.005 04:53:22 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:03.005 04:53:22 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:03.005 04:53:22 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:03.005 04:53:22 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:03.005 04:53:22 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:03.005 04:53:22 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:03.005 04:53:22 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:03.005 04:53:22 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 
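Both shutdowns above go through the same killprocess helper from autotest_common.sh. A minimal sketch of its Linux path (the real function also special-cases sudo-owned processes and non-Linux hosts):

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                  # still running?
    local name
    name=$(ps --no-headers -o comm= "$pid")     # reactor_0 / reactor_1 in the traces above
    [ "$name" = sudo ] && return 1              # sketch: sudo branch omitted
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                 # only works for children of this shell
}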
00:37:03.005 04:53:22 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:03.005 04:53:22 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:03.005 04:53:22 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:03.005 04:53:22 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:03.005 04:53:22 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:03.005 04:53:22 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:03.005 04:53:22 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:03.005 04:53:22 -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:03.005 04:53:22 -- common/autotest_common.sh@10 -- # set +x 00:37:03.005 04:53:22 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:03.005 04:53:22 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:37:03.005 04:53:22 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:37:03.005 04:53:22 -- common/autotest_common.sh@10 -- # set +x 00:37:04.905 INFO: APP EXITING 00:37:04.905 INFO: killing all VMs 00:37:04.905 INFO: killing vhost app 00:37:04.905 INFO: EXIT DONE 00:37:05.838 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:37:05.839 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:05.839 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:05.839 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:05.839 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:05.839 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:06.096 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:06.096 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:06.096 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:06.096 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:06.096 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:06.096 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:06.096 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:06.096 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:06.096 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:06.096 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:06.096 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:07.471 Cleaning 00:37:07.471 Removing: /var/run/dpdk/spdk0/config 00:37:07.471 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:07.471 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:07.471 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:07.471 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:07.471 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:07.471 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:07.471 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:07.471 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:07.471 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:07.471 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:07.471 Removing: /var/run/dpdk/spdk1/config 00:37:07.471 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:07.471 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:07.471 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:07.471 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:07.471 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:07.471 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:07.471 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:07.471 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:07.471 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:07.471 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:07.471 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:07.471 Removing: /var/run/dpdk/spdk2/config 00:37:07.471 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:07.471 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:07.471 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:07.471 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:07.471 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:07.471 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:07.471 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:07.471 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:07.471 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:07.471 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:07.471 Removing: /var/run/dpdk/spdk3/config 00:37:07.471 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:07.471 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:07.471 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:07.471 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:07.471 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:07.471 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:07.471 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:07.471 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:07.471 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:07.471 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:07.471 Removing: /var/run/dpdk/spdk4/config 00:37:07.471 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:07.471 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:07.471 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:07.471 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:07.471 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:07.471 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:07.471 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:07.471 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:07.471 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:07.471 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:07.471 Removing: /dev/shm/bdev_svc_trace.1 00:37:07.471 Removing: /dev/shm/nvmf_trace.0 00:37:07.471 Removing: /dev/shm/spdk_tgt_trace.pid2657618 00:37:07.471 Removing: /var/run/dpdk/spdk0 00:37:07.471 Removing: /var/run/dpdk/spdk1 00:37:07.471 Removing: /var/run/dpdk/spdk2 00:37:07.471 Removing: /var/run/dpdk/spdk3 00:37:07.471 Removing: /var/run/dpdk/spdk4 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2655968 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2656699 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2657618 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2657949 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2658634 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2658777 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2659499 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2659536 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2659748 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2661057 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2661872 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2662158 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2662350 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2662552 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2662740 00:37:07.471 Removing: 
/var/run/dpdk/spdk_pid2662898 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2663056 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2663349 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2663879 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2666182 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2666446 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2666615 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2666619 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2667002 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2667053 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2667361 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2667487 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2667655 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2667700 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2667954 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2667960 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2668323 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2668481 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2668799 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2668889 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2668996 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2669057 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2669339 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2669492 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2669643 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2669893 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2670078 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2670237 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2670388 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2670666 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2670826 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2670982 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2671144 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2671411 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2671571 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2671722 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2671992 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2672157 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2672319 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2672473 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2672761 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2672925 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2673109 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2673313 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2675242 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2729467 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2732079 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2739556 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2742819 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2745269 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2745695 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2752926 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2752928 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2753482 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2754125 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2754784 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2755179 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2755191 00:37:07.471 Removing: /var/run/dpdk/spdk_pid2755442 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2755460 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2755577 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2756118 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2756776 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2757433 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2757828 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2757842 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2757981 00:37:07.730 Removing: 
/var/run/dpdk/spdk_pid2758863 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2759579 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2764924 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2765076 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2767704 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2771891 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2774058 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2780321 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2785504 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2786695 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2787357 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2797410 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2799560 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2824902 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2827676 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2828777 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2830451 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2830756 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2830940 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2830958 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2831391 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2832706 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2833308 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2833736 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2835342 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2835644 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2836206 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2838592 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2841847 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2845372 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2868861 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2871559 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2875397 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2876358 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2877443 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2880002 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2882320 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2886432 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2886440 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2889202 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2889374 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2889591 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2889855 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2889866 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2891167 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2892849 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2894026 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2895206 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2896386 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2897560 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2901385 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2901713 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2902993 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2903730 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2907427 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2909292 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2912705 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2916015 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2922838 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2927191 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2927194 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2939372 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2939775 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2940183 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2940696 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2941167 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2941700 00:37:07.730 Removing: 
/var/run/dpdk/spdk_pid2942101 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2942510 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2944995 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2945143 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2948920 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2948980 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2950694 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2956218 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2956224 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2959117 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2960435 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2961907 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2962649 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2964157 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2964930 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2970265 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2970588 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2970982 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2972534 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2972873 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2973210 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2975651 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2975666 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2977113 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2977479 00:37:07.730 Removing: /var/run/dpdk/spdk_pid2977559 00:37:07.730 Clean 00:37:07.988 04:53:27 -- common/autotest_common.sh@1447 -- # return 0 00:37:07.988 04:53:27 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:07.988 04:53:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:07.988 04:53:27 -- common/autotest_common.sh@10 -- # set +x 00:37:07.988 04:53:28 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:07.988 04:53:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:07.988 04:53:28 -- common/autotest_common.sh@10 -- # set +x 00:37:07.988 04:53:28 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:07.989 04:53:28 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:07.989 04:53:28 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:07.989 04:53:28 -- spdk/autotest.sh@391 -- # hash lcov 00:37:07.989 04:53:28 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:07.989 04:53:28 -- spdk/autotest.sh@393 -- # hostname 00:37:07.989 04:53:28 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:08.246 geninfo: WARNING: invalid characters removed from testname! 
00:37:40.366 04:53:55 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:40.366 04:53:59 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:42.268 04:54:02 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:45.553 04:54:05 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:48.087 04:54:08 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:51.391 04:54:11 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:53.923 04:54:14 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:53.923 04:54:14 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:53.923 04:54:14 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:53.923 04:54:14 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:53.923 04:54:14 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:53.923 04:54:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.923 04:54:14 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:53.923 04:54:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:53.923 04:54:14 -- paths/export.sh@5 -- $ export PATH
00:37:53.923 04:54:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:53.923 04:54:14 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:37:53.923 04:54:14 -- common/autobuild_common.sh@437 -- $ date +%s
00:37:53.923 04:54:14 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720925654.XXXXXX
00:37:53.923 04:54:14 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720925654.E63t0t
00:37:53.923 04:54:14 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:37:53.923 04:54:14 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']'
00:37:53.923 04:54:14 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:37:53.923 04:54:14 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:37:53.923 04:54:14 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:37:53.923 04:54:14 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:37:53.924 04:54:14 -- common/autobuild_common.sh@453 -- $ get_config_params
00:37:53.924 04:54:14 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:37:53.924 04:54:14 -- common/autotest_common.sh@10 -- $ set +x
00:37:53.924 04:54:14 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:37:53.924 04:54:14 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:37:53.924 04:54:14 -- pm/common@17 -- $ local monitor
00:37:53.924 04:54:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:53.924 04:54:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:53.924 04:54:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:53.924 04:54:14 -- pm/common@21 -- $ date +%s
00:37:53.924 04:54:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:53.924 04:54:14 -- pm/common@21 -- $ date +%s
00:37:53.924 04:54:14 -- pm/common@25 -- $ sleep 1
00:37:53.924 04:54:14 -- pm/common@21 -- $ date +%s
00:37:53.924 04:54:14 -- pm/common@21 -- $ date +%s
00:37:53.924 04:54:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720925654
00:37:53.924 04:54:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720925654
00:37:53.924 04:54:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720925654
00:37:53.924 04:54:14 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720925654
00:37:54.182 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720925654_collect-vmstat.pm.log
00:37:54.182 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720925654_collect-cpu-load.pm.log
00:37:54.183 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720925654_collect-cpu-temp.pm.log
00:37:54.183 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720925654_collect-bmc-pm.bmc.pm.log
00:37:55.119 04:54:15 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:37:55.119 04:54:15 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:37:55.119 04:54:15 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:55.119 04:54:15 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:37:55.119 04:54:15 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:37:55.119 04:54:15 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:37:55.119 04:54:15 -- spdk/autopackage.sh@19 -- $ timing_finish
00:37:55.119 04:54:15 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:55.119 04:54:15 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:37:55.119 04:54:15 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:55.119 04:54:15 -- spdk/autopackage.sh@20 -- $ exit 0
00:37:55.119 04:54:15 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:37:55.119 04:54:15 -- pm/common@29 -- $ signal_monitor_resources TERM
00:37:55.119 04:54:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:37:55.119 04:54:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:55.119 04:54:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:37:55.119 04:54:15 -- pm/common@44 -- $ pid=2989495
00:37:55.119 04:54:15 -- pm/common@50 -- $ kill -TERM 2989495
00:37:55.119 04:54:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:55.119 04:54:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:37:55.119 04:54:15 -- pm/common@44 -- $ pid=2989497
00:37:55.119 04:54:15 -- pm/common@50 -- $ kill -TERM 2989497
00:37:55.119 04:54:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:55.119 04:54:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:37:55.119 04:54:15 -- pm/common@44 -- $ pid=2989499
00:37:55.119 04:54:15 -- pm/common@50 -- $ kill -TERM 2989499
00:37:55.119 04:54:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:55.119 04:54:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:37:55.119 04:54:15 -- pm/common@44 -- $ pid=2989530
00:37:55.119 04:54:15 -- pm/common@50 -- $ sudo -E kill -TERM 2989530
00:37:55.119 + [[ -n 2551534 ]]
00:37:55.119 + sudo kill 2551534
00:37:55.128 [Pipeline] }
00:37:55.146 [Pipeline] // stage
00:37:55.151 [Pipeline] }
00:37:55.167 [Pipeline] // timeout
00:37:55.172 [Pipeline] }
00:37:55.188 [Pipeline] // catchError
00:37:55.193 [Pipeline] }
00:37:55.210 [Pipeline] // wrap
00:37:55.215 [Pipeline] }
00:37:55.230 [Pipeline] // catchError
00:37:55.238 [Pipeline] stage
00:37:55.240 [Pipeline] { (Epilogue)
00:37:55.253 [Pipeline] catchError
00:37:55.254 [Pipeline] {
00:37:55.264 [Pipeline] echo
00:37:55.265 Cleanup processes
00:37:55.269 [Pipeline] sh
00:37:55.575 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:55.575 2989643 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:37:55.575 2989759 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:55.590 [Pipeline] sh
00:37:55.896 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:55.896 ++ grep -v 'sudo pgrep'
00:37:55.896 ++ awk '{print $1}'
00:37:55.896 + sudo kill -9 2989643
00:37:55.907 [Pipeline] sh
00:37:56.187 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:06.164 [Pipeline] sh
00:38:06.448 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:06.448 Artifacts sizes are good
00:38:06.463 [Pipeline] archiveArtifacts
00:38:06.470 Archiving artifacts
00:38:06.681 [Pipeline] sh
00:38:06.963 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:38:06.979 [Pipeline] cleanWs
00:38:06.988 [WS-CLEANUP] Deleting project workspace...
00:38:06.988 [WS-CLEANUP] Deferred wipeout is used...
00:38:06.995 [WS-CLEANUP] done
00:38:06.997 [Pipeline] }
00:38:07.017 [Pipeline] // catchError
00:38:07.030 [Pipeline] sh
00:38:07.314 + logger -p user.info -t JENKINS-CI
00:38:07.324 [Pipeline] }
00:38:07.340 [Pipeline] // stage
00:38:07.345 [Pipeline] }
00:38:07.363 [Pipeline] // node
00:38:07.369 [Pipeline] End of Pipeline
00:38:07.404 Finished: SUCCESS